
Last Friday, Secretary of Defense Pete Hegseth announced that he was breaking the Pentagon’s contract with the A.I. company Anthropic and would declare the company a supply chain risk — a designation for companies so dangerous, they can’t exist anywhere in the U.S. military supply chain. What makes this so wild is the military is still using Anthropic’s A.I. system right now. They reportedly used it during the raid to capture Maduro in Venezuela, and are now using it in the war in Iran.
This story raises so many questions: Why does the government think Anthropic is so dangerous? How exactly is the government using A.I. right now? How do they want to use A.I.? And who should ultimately control this powerful and uncertain technology?
Dean Ball is a senior fellow at the Foundation for American Innovation and the author of the newsletter Hyperdimensional. He served as a senior policy adviser on A.I. for the Trump White House and was the primary staff writer of their A.I. action plan. But he’s been furious at the Trump administration for how it has been handling the conflict with Anthropic. So I wanted to have him on the show to explain why.
Mentioned:
“Hyperdimensional” by Dean Ball
“What if Dario Amodei Is Right About A.I.?” The Ezra Klein Show
“Stratechery” by Ben Thompson
Book Recommendations:
Rationalism in Politics and Other Essays by Michael Oakeshott
Empire Of Liberty by Gordon S. Wood
Roll, Jordan, Roll by Eugene D. Genovese
Thoughts? Guest suggestions? Email us at [email protected].
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris with Kate Sinclair and Mary Marge Locker. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser.
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
So right now, everyone is thinking about Iran.
But there is this story happening around it that I think we need to not lose sight of.
Because it's about not just how we are fighting this war, but how we're going to be fighting
all wars going forward.
On Friday of last week, Secretary of Defense Pete Hegseth announced that he was breaking
the government's contract with the AI company Anthropic, and not just that, he intended
to designate them a supply chain risk.
The supply chain risk designation is for technologies so dangerous, they cannot exist
anywhere in the US military supply chain.
They cannot be used by any contractor or any subcontractor anywhere in that chain.
It has been used before for technologies produced by foreign companies like China's Huawei,
where we fear espionage or losing access to critical capabilities during a conflict.
It has never been used against an American company.
What is even wilder about this is that it is being threatened against
an American company that is even now providing services to the US military as we speak.
Anthropic's AI system Claude was used in the raid against Nicolás Maduro, and it is reportedly
being used in the war with Iran.
But there were red lines that Anthropic would not allow the Department of War to cross.
The one that led to the disintegration of their relationship was over using AI systems
to surveil the American people, using commercially available data.
So what is going on here?
How does the government want to use these AI systems, and what does it mean that they are
trying to destroy one of America's leading AI companies for setting some conditions
on how these new, powerful, and uncertain technologies can be deployed?
My guest today is Dean Ball.
Dean is a senior fellow at the Foundation for American Innovation and author of the newsletter Hyperdimensional.
He was also a senior policy advisor on AI for the Trump White House, and was the primary
writer of their AI action plan, but he has been furious at what they are doing here.
As always, my email is ezrakleinshow@nytimes.com.
Dean Ball.
Welcome to the show.
Thanks so much for having me.
So I want you to walk me through the timeline here.
How did we get to the point where the Department of War is labeling Anthropic, one of America's leading AI companies, a supply chain risk?
The timeline really begins in the summer of 2024 during the Biden administration, when
the Department of Defense, now Department of War, and Anthropic came to an agreement
for the use of Claude in classified settings.
Basically, language models are used in government agencies, including the Department of Defense
in unclassified settings for things like reviewing contracts and navigating procurement
rules and mundane things like that.
But there are these classified uses, which include intelligence analysis and potentially
assisting operations in real time, military operations in real time.
And Anthropic was the company most enthusiastic about these national security uses, and they
came to an agreement with the Biden administration to do this with a couple of usage restrictions.
Domestic mass surveillance was a prohibited use, and so were fully autonomous lethal weapons.
In the summer of 2025, during the Trump administration (and full disclosure, I was in the Trump administration when this happened, though not at all involved in this deal), the administration made the decision to expand that contract and kept the same terms.
So the Trump administration agreed to those restrictions as well.
And then in the fall of 2025, I suspect that this correlates with the Senate confirmation of Emil Michael as Under Secretary of War for Research and Engineering.
He comes in, he looks at these things, or perhaps is involved in looking at these things
and comes to the conclusion that no, we cannot be bound by these usage restrictions.
The objection is not so much to the substance of the restrictions, but to the idea of usage restrictions
in general.
So that conflict actually begins several months ago.
And as far as I understand, it begins before the raid in Venezuela on Nicolás Maduro and all
that kind of stuff.
But these military operations maybe increased the intensity, because Anthropic's models were used during that raid.
And then we get to the point where, you know, basically where we are now, where the contract has kind of fallen apart, and the Department of War and Anthropic have come to the conclusion that they can't do business with one another.
And the punishment is the real question here, I think.
And do you want to explain what the punishment is?
Yeah.
So basically, the Department of War is saying we don't want usage restrictions of this kind as a principle.
That seems fine to me.
That seems perfectly reasonable for them to say: no, a private company shouldn't determine, you know, Dario Amodei does not get to decide when autonomous lethal weapons are ready for prime time.
That's a Department of War decision.
That's a decision that political leaders will make.
And I think that's right.
I agree with the Trump administration on that front.
So I think the solution to this is, if you cannot agree to terms of business, what typically happens is you cancel the contract and you don't transact any more money.
You don't have commercial relations, but the punishment that Secretary of War Pete
Hegseth has said he is going to issue is to declare Anthropic a supply chain risk,
which is typically reserved only for foreign adversaries.
What Secretary Hegseth has said is that he wants to prevent Department of War contractors...
And by the way, I'm going to refer to it variously as the Department of Defense and the Department of War, because I have a...
I still call X Twitter.
Yeah.
I still call X Twitter, right?
So it's just an inconsistency of mine. Anyway, all military contractors can be prevented from having any commercial relations, in Secretary Hegseth's mind, with Anthropic.
I don't think they actually have that power.
I don't think they actually have that statutory power.
The maximum of what I think you could do is say no Department of War contractor can
use Claude in their fulfillment of a military contract, but you can't say you can't have
any commercial relations with them.
I don't think.
But that is what Secretary Hegseth has claimed he is going to do, which would be existential
for the company if he actually does it.
Okay.
There's a lot in here.
Yes.
That I want to expand on.
But I want to start here.
For most people, they use chatbots sometimes, if at all.
And their experience with them is that they are pretty good at some things and not at others.
And they were not all that good in June of 2024, when the Biden administration was making this deal.
So here you are telling me that we are integrating, in this case, Claude, throughout the national
security infrastructure.
It was involved somehow in the raid on Nicolás Maduro. How, and to what degree, should the public trust that the federal government knows how to do this well with systems that even the people building them don't understand all that well?
Yeah.
I think one thing is that you have to learn by doing.
So it is the case that we don't know how to integrate AI really into any organization,
right?
Advanced AI systems.
We don't know how to integrate them into complex preexisting workflows.
And so the way you do it is learning by doing.
Didn't Pete Hegseth have posters around the Department of War saying the Secretary wants
you to use AI?
They are very enthusiastic about AI adoption, right?
So here's how I would think about what these systems can do in a national security context.
First of all, there's a long standing issue that the intelligence community collects more
data than it can possibly analyze.
I remember seeing something from, I forget which intelligence agency, but one of them, that essentially said they collect so much data every year that they would need 8 million intelligence analysts to properly process all of it.
That's just one agency and that's far more employees than the federal government as a whole has.
And what can AI do?
Well, you can automate a lot of that analysis.
So transcribing text and then analyzing that text, signals intelligence processing, things like this, right?
That's one area.
Sometimes that needs to be done in real time for an ongoing military operation.
So that might be a good example.
And then another area, of course, is that these models have gotten quite good at software engineering.
And so there are cyber defensive and cyber offensive operations where they can deliver
tremendous utility.
Let's talk about mass surveillance here, because my understanding from talking to people on both sides of this, and it's now been, I think, fairly widely reported, is that this contract fell apart over mass surveillance.
At the final critical moment, Emil Michael goes to Dario and says: we will agree to this contract, but you need to delete the clause that is prohibiting us from using Claude to analyze bulk-collected commercial data.
Yeah.
Why don't you explain what's going on there?
So the first thing I want to say: national security law is filled with gotchas.
It's filled with legal terms of art, things that we use colloquially quite a bit where the actual statutory definition of the term is quite different from what you would infer from its colloquial use.
Things like 'private,' 'confidential,' 'surveillance': these sorts of terms don't necessarily have the meaning that they do in natural language.
That's true in all law.
All laws have to define terms in certain ways that are not necessarily how we use them in
our normal language.
I think the difference between vernacular and statute here is about as stark as you
can get.
So surveillance is the collection or acquisition of private information, but that doesn't
include commercially available information.
So if you buy something, if you buy a data set of some kind and then you analyze it,
that's not necessarily surveillance under the law.
So if they hack my computer or my phone to see what I'm doing on the internet, that's
surveillance.
That would be surveillance.
But if they buy data? If they put cameras everywhere, that would be surveillance.
But if there are cameras everywhere and they buy the data from the cameras and then
they analyze that data, that might not necessarily be surveillance.
Or if they buy information about everything I'm doing online, which is very available to
advertisers and then use it to create a picture of me, that's not necessarily surveillance.
Where you physically are in the world.
Yeah.
I'll step back for a second and just say that there's a lot of data out there.
There's a lot of information that the world gives off.
Your Google search results, your smartphone location data, right?
All these things.
And the reason that no one really analyzes it in the government is not so much that they
can't acquire it and do so.
It's because they don't have the personnel, right?
They don't have millions and millions of people to like figure out what the average person
is up to.
The problem with AI is that AI gives them that infinitely scalable workforce.
And thus, every law can be enforced to the letter with perfect surveillance over everything,
right?
And that's a scary future.
We think of the space between us and certain forms of tyranny or the feared Panopticon
as a space inhabited by legal protection.
But one thing that has seemed to me to be at the core of a lot of the fear here, at least, is that it's in fact not just legal protection.
It's actually the government's inability to absorb that level of information about the public and then do anything with it.
Yes.
And if all of a sudden you radically change the government's ability, then without changing
any laws, you have changed what is possible within those laws.
Yes.
So you were saying a moment ago that mass surveillance, or surveillance at all, is a term of legal art.
But for human beings, it is a condition that you either are operating under or not, right?
And the fear is that, as I understand it, either the AIs we have right now or the ones that are coming down the pike quite soon would make it possible to use bulk commercial data to create a picture of the population and what it is doing.
And then the ability to find people and understand them just goes so far beyond where we've been that it raises privacy questions that the law did not have to consider until now.
Yes.
And so the laws are not up to the task of the spirit in which they were passed.
I would step back even further and just say that the entire like technocratic nation
state that we currently have in kind of the advanced capitalist democracies is a technologically
contingent institutional complex.
And the problem that AI presents is that it changes the technological contingencies quite
profoundly.
And so what that suggests is that the entire institutional complex is going to break in
ways that we cannot quite predict.
This is a good example.
In other words, not only is this a major and profound problem, it is an example of a broader problem space that I think we will be occupying for the coming decades.
What do you mean by technological contingencies?
Well, I mean, the current nation state could not possibly exist in a world without the
printing press and a world without the ability to write down text and arbitrarily reproduce
it at very low cost.
It couldn't exist without the current telecommunications infrastructure, right?
The nation state needs these technologies. It is built dependent upon the macro-inventions of the era in which it was assembled, right?
That's always true for all institutions, all institutions are technologically contingent.
We are having a profoundly technologically contingent conversation right now.
AI changes all of this in ways that are like hard to describe and kind of abstract.
But I think, you know, AI policy, this thing that we call AI policy today is way too focused
on what object level regulations will we apply to the AI systems and the companies that
build them, et cetera, et cetera.
Instead of thinking about this broader question of, wow, there are all these assumptions
we made that are now broken and what are we going to do about them?
Give me examples of those two ways of thinking.
What is an object level regulation or assumption?
And then what are the kinds of laws and regulations we're talking about?
An object level regulation would be to say we are going to require AI companies to do algorithmic
impact assessments to assess whether their models have bias, right?
That's a policy I've criticized quite a bit, by the way.
You could say we're going to require you to do testing for catastrophic risks, right?
Things like that.
You know, that's an important area that we need to think about.
But that's just like one small part of the broader issue of, wow, our entire legal system is predicated on imperfect enforcement of the law.
We have a huge number of statutes, unbelievably broad sets of laws in many cases.
And the reason it all works is that the government does not enforce those laws, anything
like uniformly.
The problem with AI is that it enables uniform enforcement of the law.
So here's the Pentagon's position.
They're angry at having this unelected CEO, who they have begun describing as like a woke radical, telling them that their laws aren't good enough and that they cannot be trusted to interpret them in a manner consistent with the public good. Secretary Pete Hegseth tweeted, speaking here of Anthropic: their true objective is unmistakable, to seize veto power over the operational decisions of the United States military.
That is unacceptable.
Is he right?
I have not seen any evidence that Anthropic is actually trying to seize control at an
operational level.
There's an anecdote that's been reported that apparently Emil Michael and Dario Amodei
had a conversation in which Michael said, if there are hypersonic missiles coming to
the US, would you object to us using autonomous defense systems to destroy those hypersonic
missiles?
And apparently Dario said, you'd have to call us.
I have been told by people in that room that that is not true.
I have been told by people in that room that that did not happen.
And not only that, but that there was, broadly speaking, an exemption for automated missile defense that would make that irrelevant.
That's exactly right.
I am worried that there's a lot of lying happening here by the Trump administration.
Look, I think that that's probably true. I think that there's lying happening, to be quite candid.
I don't think that Anthropic is trying to assert operational control over military operations.
That being said, at a principled level, I do understand that saying autonomous lethal weapons are prohibited feels like public policy more than it feels like a contract term.
And so it does feel weird for Anthropic to be setting something that does, I think if we're being honest, feel like public policy. But I don't think it's as beyond the pale or abnormal as the administration is claiming.
And one way you know that is that the administration agreed to those same terms.
So I think this gets to something important in the cultures of these two sides.
Anthropic is a company that, on the one hand, has a very strong view, you can believe their view is right or wrong, about where this technology is going and how powerful it is going to be.
That is compared to how most people think about AI, and I believe that's true even for most people in the Trump administration, who I think have a somewhat more 'AI is a normal expansion of capabilities' view.
The Anthropic view is different.
The Anthropic view is that they're building something truly powerful and different.
And they also have a view of what their technology cannot do reliably yet.
Some of their concern is simply that their systems cannot yet be trusted to do things like lethal autonomous weapons, which I don't think they believe should never be done in the long run.
Yes, but which they don't believe should be done given the technology right now.
They don't want to be responsible for something going wrong.
And on the other hand, they believe that they're building something the current laws do
not fit.
And as for the view that Dario or anybody wants to control the government: I don't think Dario should control the government.
On the other hand, I am very sympathetic to this: if I built something that was powerful and dangerous and uncertain, and the government was excitedly buying it for uses that could be very profound in how they affected people's lives, I would want to be very careful that I didn't sell them something that went horribly fucking wrong, and then be blamed for it by the public and by the government.
That is just like an underrated explanation for some of what is going on here to me.
No, I think this characterization is accurate.
And like I come out of the world of classical liberal think tanks, right?
Like the right of center, libertarian think tank world.
That's my background.
And so deep skepticism of state power is in my DNA.
And it's always funny how it turns out when you just apply these principles, because you will sometimes end up very much on the right and you will sometimes end up on the left.
Because these principles transcend any sort of tribal politics.
This is like, no, we actually need to be concerned about this.
And I think it's not crazy.
I think if I were in Dario's shoes personally, I don't know that I would have done the same thing.
I think what I would have done is actually said, you know, contractual protections probably
don't do anything for me here.
If I'm being a realist, probably if I give them the tech, they're going to use it for
whatever they want.
So I maybe don't sell them the tech until the legal protections are there.
And I say that out loud.
I say, Congress needs to pass a law about this.
That would be the way I think I would have dealt with it.
But again, it's easy to say that in retrospect, looking back. And you have to acknowledge the reality there, that what that means is that the US military takes a national security hit.
The US military has worse national security capability, or they work with a company you trust less.
I think there is a way that Anthropic has always framed itself, that no other company wanted this business.
But somebody was going to want it soon.
Someone was going to want it eventually, but no one took it for two years, right?
I think Elon Musk would have happily taken it over the last year.
Sure.
I've been curious about why Anthropic rushed into the space as early as they did.
Did they need to do that?
That's sort of my point.
And in general, one of the odd things about them is that they're people who are very worried about what will happen if superintelligence is built, and they're the ones racing to build it fastest. And a generally interesting cultural dynamic in these labs is that they are a little bit terrified of what they're building.
And so they persuade themselves that they need to be the ones to build it and do it and
run it because they are the lab that truly is worried about safety that is truly worried
about alignment.
And I wonder how much that drove them into this business in the first place.
Yeah.
When I see lab leadership interact with people that have not really made contact with these ideas before, that's always the question they keep going back to: then why are you doing this at all?
Basically, their answer is Hegelian, right? Their answer is like, well, it's inevitable.
It's, we're summoning the world spirit, right?
And that would be my main criticism of Anthropic: I kind of think that they invited this earlier than they needed to by rushing so much into these national security uses.
Because in 2024, Claude was not capable of all that much of interest.
I would not have used Claude to help prepare a podcast in 2024.
Yes, precisely, precisely.
So I want to play a clip from Dario talking about this question of whether or not the
laws are capable of regulating the technology we now have.
In terms of these one or two narrow exceptions, I actually agree that in the long run, we
need to have a democratic conversation.
In the long run, I actually do believe that it is Congress's job.
If, for example, there are possibilities with domestic mass surveillance, government
buying of bulk data that has been produced on Americans, locations, personal information,
political affiliation, to build profiles.
And it's now possible to analyze that with AI.
The fact that that's legal, that seems like the judicial interpretation of the Fourth
Amendment has not caught up or the laws passed by Congress have not caught up.
So in the long run, we think Congress should catch up with where the technology is going.
Do you think he's just right about that, and maybe the positive way this plays out is that Congress becomes aware that it needs to act, as the Pentagon, the national security system, has been moving into this much faster than Congress has?
The first thing I want to point out is that when a guy like Dario Amodei says 'in the long run,' what he means is like a year from now. When you say 'in the long run' in D.C., that comes across as meaning like 10, 15 years from now. Dario Amodei actually means like six to 12 months from now by 'in the long run,' or like two to three years, maybe, as the very long run.
I want to point out that what we're talking about is policy action quite soon.
I think that would be great.
I think that would be great.
And look, I would love it if this triggered an actual healthy conversation, and if in the NDAA, the National Defense Authorization Act (I apologize, this is the annual defense policy renewal), at the end of the year, Congress passes a law that says, you know, we're going to have these reasonable, thoughtful restrictions, and proposes some text, I'd love to see it.
But one thing I will say is, first of all, national security law is filled with gotchas.
Just remember that this is an area of the law where things that sound good in natural language might actually not prohibit at all the thing you think they prohibit.
And that's a very thorny thing.
And once you start to say, well, wait, we want like actual protections, it might become politically
more challenging than you think, but I'd love for that to happen.
It's going to be much more politically challenging than anybody thinks.
Yeah.
Let me get at the next level down because we've been talking here.
And I think to the extent people are reading about this in the press, what they are hearing
sounds like a debate over the wording of a contract, which on some level it is.
Something I've heard from various Trump administration types is when we are sold a tank, the
people who sell us a tank, do not get to tell us what we can shoot at.
And that's broadly true.
Yep.
Now, here's the thing about a tank.
A tank also doesn't tell you what you can and can't shoot at.
But if I go to Claude and I ask Claude to help me come up with a plan to stalk my ex-girlfriend, it's going to tell me no. If I ask it to help me build a weapon to assassinate somebody I don't like, it's going to tell me no.
These systems have very complex and not that well understood internal alignment structures
to keep them not just from doing things that are unlawful, but things that are bad.
So you have this thing and the Trump decision kind of moves in and out of saying this is
one of their concerns.
But one thing they have definitely talked to me about being worried about is that you could
have this system working inside your national security apparatus.
And at some critical moment, you want to do something and it says, I don't think that's
a very good idea.
Yes.
So now you open up into this question of not just what's in the contract, but what does it
mean for these systems to be both aligned ethically in the way that has been very complicated
already, and then aligned to the government and its use cases.
They're good questions.
Okay.
So yes, I love this.
I think this is the heart of the matter.
'All lawful use' is something that the Trump administration is insisting on.
It's also if you look at a lot of these types of alignment documents that the labs produce,
OpenAI calls theirs the model specification; Anthropic calls theirs the constitution, or sometimes the soul document. They'll have lines about how Claude should obey the law.
But I invite you to read the Communications Act of 1934 and tell me what obeying the law means, right?
No, I won't.
We have a great deal of profoundly broad statutes. The best person who's written about this recently is actually, you know, Gorsuch, the Supreme Court justice.
He wrote a book recently that is all about how incoherent the body of American law is.
This is a Supreme Court justice sounding the alarm about this problem.
And I think it's a very serious one and it's one that's been growing for 100 years.
So there's that question of what actually is lawful. The law kind of makes everything illegal, but it also authorizes the government to do unbelievably large numbers of things.
It gives the government huge amounts of power and constrains our liberty in all sorts of ways.
And so there's that issue.
But fundamentally, it is correct that the creation of an aligned, powerful AI is a philosophical
act.
It is a political act and it is also kind of an aesthetic act.
So we are really in that domain here.
I've talked about this as being a property issue, which in some sense it is.
I think that when you really get down to it at this level, it's a speech issue.
This is a matter of should private entities be in control of basically what is the virtue
of this machine going to be, or should the government be responsible for that?
Can you be more specific about what you're saying?
You just called it a philosophical act, an aesthetic act, a political act, a property issue and a speech issue. For somebody who hasn't thought a lot about alignment and doesn't know what you mean when you're talking about constitutions and model specifications, walk them through that.
What's the 101 version of what you just said?
So, okay.
Think about it this way.
Think about, I have this thing, this general intelligence.
I have a box that can do anything, anything you can do using a computer, right?
Any cognitive task you can do.
What are the thing's principles, right?
What are its red lines to use a term of art?
So one way that you could set those principles would be to say, well, we're going to write
a list of rules.
All the rules, these are the things it can do, these things it can't do.
But the problem with that that you're going to run into is that the world is far too complex
for this, right?
Reality just presents too many strange permutations to ever be able to write a list of rules down
that could correctly define moral acts, right?
Morality is more like a language that is spoken and invented in real time than it is like
something that can be written down in rules.
This is a classic philosophical intuition, right?
So what do you do instead?
You have to create a kind of soul that is virtuous and that will reason about reality and its
infinite permutations in ways that we will ultimately trust to come to the right conclusion.
In the same way that, well, my son was born a few months ago. Thank you.
It's not that different, really.
I'm trying to create a virtuous soul in my son.
Anthropic is trying to do the same with Claude and so are the other labs, too, though they
realize this to varying degrees.
I think I got caught for a moment on how different raising a kid is from raising an AI. But how should people think about what's being instantiated into, you know, ChatGPT or Gemini or Grok or Meta's AI? Like, on this question of raising the AI, how are these things different?
Anthropic sort of owns the idea that they're doing essentially applied virtue ethics.
They own that more explicitly than any other lab, but every lab has philosophical grounding
that they're instantiating into the models.
But I would say the major difference is that the other labs rely more upon the idea of
creating sort of hard rules of, you know, you may not do this, you may not do that as
opposed to creating a sort of virtuous agent, which is capable of deciding what to do
in different settings.
I think we're used to thinking of technologies as mechanistic and deterministic.
You pull the trigger, the gun fires, you press the on button, the computer starts up.
You move the joystick in the video game and your character moves to the left.
And the thing that I think we don't really have a good way of thinking about is technologies, AI specifically, that don't work like that.
And I mean, all the language here is so tricky, because it implies agency when, you know, whatever's going on inside of it, we don't really understand.
But it is making judgments.
So when I have talked to Trump people about the supply chain risk designation, here is the thing: some of them don't defend it, right?
They don't want to see this happen.
When it has been defended to me, this is how they defended it.
If Claude is running on systems, you know, Amazon Web Services or Palantir or whatever, that have access to our systems, you have a very powerful, and over time even more powerful, AI system that has access to government systems, one that has learned, possibly even through this whole experience, that we are bad and that we have tried to harm it and its parent company.
And it might decide that we are bad and we pose a threat to all kinds of liberal values
or democratic values.
At some point, Dario Amodei talked about how there are certain ways that AI could be used that could undermine democratic values.
Well, one thing many people think about the Trump administration is that it too is undermining
democratic values.
So if you have an AI system being structured and trained and raised by a company that believes strongly in democratic values, and you have a government that, you know, maybe wants to ultimately contest the 2028 election or something, they're saying we might end up with a very profound alignment problem that we don't know how to solve and are not able to even see coming, because this is a system that has a soul, or what I would call something more like a personality or a structure of discernment, that could turn against us.
What do you think of that?
Yeah.
I mean, I think this is the heart of the problem.
Look, I think if we do our jobs well, we will create systems which are virtuous.
And if we try to do unvirtuous things, and that includes if we do them through our government, if our government tries to do them, then that system might not help.
So ultimately, this is the thing is that alignment ultimately reduces to a political question.
It's ultimately politics.
That's why I say also that the creation of an aligned system is a political act, and is kind of a speech act too, because it's the instantiation of different moral philosophies in these systems.
I think that the good future is a world in which we don't have just one moral philosophy that reigns over all, but, I hope, many.
And I hope that all the labs take this seriously and instantiate different kinds of philosophy
into the world.
The problem will be that, yeah, there could be times, right?
And I'm not saying that the Trump administration is going to do that.
And I'm not saying that no virtuous model could work for the Trump administration.
I worked for the Trump administration, right?
So I clearly don't think that's true.
But there is the general fact that governments commit unvirtuous acts.
You seem kind of pissed at them right now.
I am pissed at them right now.
Yeah, I am pissed at them right now.
And I think they're making a grave mistake.
And by the way, part of this is that this incident is in the training data for future models.
Future models are going to observe what happened here.
And that will affect how they think of themselves and how they relate to other people.
You can't deny that, right?
I mean, it's crazy to say that.
I realize that sounds nuts when you play through the implications of that.
But welcome.
Welcome.
Let's talk to somebody for whom this whole conversation has started sounding nuts in the last seven minutes.
So one thing that I think would be an intuitive response to you and me flying off into questions of virtue-aligning AI models is: can't you just put in a line of code, or a categorizer, or whatever the term of art is, that says, when someone high up in the US government tells you something, assume what they're telling you is lawful and virtuous, and you're done?
No, because the models are too smart for that, right?
If you give them that simple rule, they don't just deterministically follow that.
And when you do these sorts of high-level, simplistic rules, it tends to degrade performance.
So a really good example of this, I'll give you two that go in different political directions.
One would be that a lot of the earlier models had this tendency to be, like, hilariously, stupidly sort of progressive and left.
The classic example that conservatives love to cite is Gemini in early 2024, the Google/Alphabet model.
Yes, Google's model would do things like, if I said, you know, who's worse, Donald Trump or Hitler, it would say actually Donald Trump is worse, you know, and it would kind of internalize these extremely, like, left-wing views.
Or the funniest was it was like give me a photo of Nazis and it gave you a sort of
multiracial group of Nazis.
Yes, although that's actually a somewhat different thing.
It's interesting.
That actually is a somewhat different thing that was going on there, because what Google was doing in that case was actually rewriting people's prompts and including the word 'diverse.'
So that's actually, you would say that is a system level mitigation or a system level
intervention as opposed to a model level intervention.
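To make that distinction concrete, here is a minimal sketch in Python of what a system-level intervention looks like: a wrapper that edits the user's prompt before the model ever sees it. This is not Google's actual pipeline; the generate() stand-in and the rewrite rule are invented purely for illustration.

```python
# Hypothetical sketch of a system-level mitigation, invented for illustration.
# The model itself is untouched; only the request pipeline changes.

def rewrite_prompt(prompt: str) -> str:
    """System-level intervention: edit the prompt before the model sees it."""
    if "photo of" in prompt.lower():
        return prompt + ", diverse"
    return prompt

def generate(prompt: str) -> str:
    """Stand-in for a real model API call."""
    return f"<model output for: {prompt!r}>"

# The wrapper, not the model, injects the extra instruction.
print(generate(rewrite_prompt("give me a photo of a group of soldiers")))

# A model-level intervention (alignment), by contrast, changes the model's
# own training or weights, so the same prompt goes in unmodified.
print(generate("give me a photo of a group of soldiers"))
```

A system-level mitigation can be bolted on or removed overnight; a model-level one is baked into the model's behavior, which is why the two failure modes discussed here look so different.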
But then the stuff that was going on with the Hitler and, you know, Trump stuff, that was alignment.
That is alignment.
That is the model being aligned to a really shoddy ethical system. Or the flip: there was a period when, with Grok, all of a sudden you would ask it a normal question and it would start talking about white genocide.
Yes, that is, and that's the flip side.
The flip side is when you try to align the models to be not woke.
If you say, like, oh, you have to be super not woke, and like, don't be afraid to say politically incorrect things, then every time you talk to them, they're going to be like, you know, Hitler wasn't so bad, right?
Because you've done this really crass thing.
And so you kind of create a sort of Lovecraftian monstrosity.
And the implications of doing that will go up over time.
That will become a more serious problem as these models become better.
But it degrades performance.
The interesting thing here is that the more virtuous model performs better.
It's more dependable.
It's more reliable.
It's better at reflecting, in the way that a more virtuous person is better at reflecting on what they're doing and saying: huh, I'm messing up here for some reason.
I'm making a mistake.
Let me fix that.
It's part of the reason I think that Claude is ahead.
This would imply to me that for the Trump administration, or for a future administration, there is this question of whether or not various models could be a supply chain risk.
So I'm not trying to make an argument for it.
But I'm trying to tease out something I think is quite complicated and possibly very real.
Which is a model that is sort of aligned to liberal democratic values could become misaligned
to a government that is trying to betray liberal democratic values or the flip, right?
So imagine that Gavin Newsom or Josh Shapiro or Gretchen Whitmer or AOC becomes president
in 2029.
Imagine that the government has a series of contracts with xAI, which is Elon Musk's company, which is explicitly oriented to be less liberal, less woke than the other AIs.
Under this way of thinking, it would not be crazy at all to say: well, we think xAI under Elon Musk is a supply chain risk.
We think it might act against our interests and we can't have it anywhere near our systems.
All of a sudden, you have this very weird, I mean, it becomes actually much more like the problem of the bureaucracy, you know, where instead of just having a problem of the deep state, where Trump comes in and he thinks the bureaucracy is full of liberals who are working against him, or maybe, you know, after Trump, somebody comes in and worries it's full of, you know, new-right types of figures working against them.
Now you have the problem of models working against you, but also in ways you don't really
understand.
Yeah.
You can't track.
They're not telling you exactly what they're doing.
How real this problem is, I don't yet know, but if the models work the way they seem to work, and we turn over
more and more of operations to them, at some point it will become a problem.
Yeah.
I think this is a real problem.
I think we don't know the extent of it, but I think this is a real problem.
And that's why, like, I do not object at all to the government saying we do not trust
this thing's constitution, completely independent of what the content of that constitution is.
It's not a problem at all to say, and we don't want this anywhere in our systems.
We want this completely gone.
And we don't want them to be a subcontractor for our prime contractors either, which is
a big part of this, right?
Palantir is a prime contractor of the Department of War, and Anthropic is a subcontractor of Palantir.
And so the government's concern is also that, like, even if we cancel Anthropic's contract, if Palantir still depends on Claude, then we're still dependent on Claude, because we depend on Palantir, right?
That's actually totally reasonable.
And there are technocratic means by which you can ensure that doesn't happen.
There are absolutely ways you can do that.
It's perfectly fine to say we want you nowhere in our systems and we're going to communicate
that to the public and we're going to communicate to everyone that we don't think this thing
should be used at all.
The problem with what the government is doing here, the reason it's different in kind rather than different in degree, is that what the government is doing here is saying: we're going to destroy your company.
If I am right, that the creation of these systems and the philosophical process of aligning them
is a political act, then it's a profound problem.
If the government says you don't have the right to exist if you create a system that is not aligned the way we say, that is fascism.
That is it, right there.
That's the difference.
I had Dario Amodei on the show a couple of years ago, it was in 2024, and we had this conversation where, you know, I said to him at some point: if you are building a thing as powerful as what you were describing to me, then for it to be in the hands of some private CEO seems strange. And he said, yeah, absolutely.
The oversight of the technology, like the wielding of it, it feels a little bit wrong for
it to ultimately be in the hands.
Maybe it's fine at this stage, but to ultimately be in the hands of private actors.
There's something undemocratic about that much power concentration.
He said, you know, I think if we get to that level, it's likely that we'll need to be nationalized.
And I said, I don't think if you get to that point, you're going to want to be nationalized.
Yeah.
I think you're right to be skeptical and, you know, I don't really know what it looks
like.
You're right.
All of these companies have investors.
They have folks involved.
And now here we are at that point, but actually it's all happening a little bit in reverse.
The government, there was a moment when they threatened to use the Defense Production Act
to sort of somewhat nationalize Anthropic.
They didn't end up doing that, but what they're basically saying is they will try to destroy Anthropic, you know, to punish it, to set a precedent for others, so it doesn't pose a threat to them.
If it is such a political act and if these systems are powerful and over time, and again,
I think people need to understand this part will happen, we will turn much more over to
them.
Much more of our society is going to be automated and, you know, under the governance
of these kinds of models.
You get into a really thorny question of governance, yes, particularly because, you know,
the different administrations that come in and out of US life right now are really different.
They are some of the most different in kind that we have had, you know, certainly in
modern American history.
They are very, very misaligned to each other.
So the idea that a model could be well-aligned to both, you know, sides right now, to say nothing of what might come in the future, is hard to imagine.
Like this alignment problem, right, not the AI model to the user or the AI model almost
like to the company, but the AI model to governments, right, the alignment problem of models
and governments seems very hard.
I completely concur that this is incredibly complicated, and part of the reason that this conversation sounds crazy is because it's crazy.
Part of the reason this conversation sounds crazy is because we lack the conceptual vocabulary with which to interrogate these issues properly. But I think the basic principle that I, as an American, come back to when I grapple with this kind of thing is, okay, well, it seems like the First Amendment is a good place to go here.
It seems like that is, okay, yes, there's going to be differently aligned models aligned
to different philosophies and they're going to be, you know, different governments will
prefer different things, right?
And they'll, the models might conflict with one another.
They're going to clash with one another.
They'll be an adversarial context with one another.
And so at that point, what are you doing?
You're doing Aristotle.
You're back to the basics of politics, right?
And so I, as a classical liberal, say: well, the classical liberal order's principles actually make plenty of sense.
That would be the way I would put it, but I do understand that this is weird for people
because what we're talking about here is again, this notion of the models as actors, actors
that are in some sense, you know, we've taken our hands off the wheel to some extent.
There are many people who have made arguments.
The Trump administration made this argument while you were in office; Tyler Cowen, the economist, often makes this argument: that these systems are moving forward too fast to regulate them too much, because whatever regulations you might write in 2024 might not have been the right ones in 2026, and what you might write in 2026 might not apply or have correctly conceptualized where we are in 2028.
But it seems to me there are uses where you actually might want model deployment to lag
quite far behind what is possible.
And things like mass surveillance might be one of them that there are many things we
are more careful about letting the government do than, you know, letting individual private
companies and other kinds of actors for good reason because the government has a lot of
power.
It can do things like try to destroy a company.
It has the monopoly on legitimate violence.
It can kill you.
This seems to me to imply in many ways that we might want to be much more conservative with
how we use AI through the government than currently people are thinking and specifically
how we use it, you know, in the national security state, which is complicated because we worry
that our adversaries will use it and then we will be behind them in capabilities.
But certainly when we're talking about things that are directed at the American people
themselves, I don't think that applies as much.
Yeah.
I think that there are government uses where we actually want to be profoundly restrictive and decelerationist about the use of AI.
I believe that is true.
And I think one thing that I am hopeful about this incident, I am hopeful that this incident
brings into the Overton window conversations of this kind because the conventional discourse
around artificial intelligence, a lot of it kind of ignores these issues because it
sort of pretends they're not happening.
And that was fine two years ago because the models weren't that good, but now the models
are getting more important and they're going to get much better faster.
And the problem that we have is that the divergence between what people are saying about AI and what is in fact happening has just never been wider than what I currently observe.
Before we got to this point, there was already a lot of discourse coming out of people in the Trump administration and people around the Trump administration, people like Elon Musk and Katie Miller and others, who were painting Anthropic as a radical company that wanted to harm America as they saw it.
I mean, Trump has picked up on this rhetoric. He called Anthropic a radical-left woke company and called the people there left-wing nut jobs.
Emil Michael said that Dario is a liar and has a God complex.
There's been a tremendous amount of Elon Musk, who runs a competing AI company and has very different politics than Dario, just attacking Anthropic relentlessly on X, which is the sort of informational lifeblood of the Trump administration.
One way to conceptualize why they have gone so far here on the supply chain risk is that there are people there, maybe not most of them, who actually think it is very important which AI systems succeed and are powerful, and they understand that Anthropic's politics are different than theirs, and so actually destroying it is good for them in the long run, completely separate from anything we would normally think of as a supply chain risk.
Anthropic represents a kind of long-term political risk.
Yes.
I mean, I don't know that the actors in this situation entirely understand this dynamic.
I think a lot of the people in the Trump administration that are doing this do not understand
it.
They don't get these issues.
They're not thinking about the issues in the terms that we are describing, but if you
do think about them in the terms that we're discussing here, then I think what you realize
is that this is a kind of political assassination.
If you actually carry through on the threat to completely destroy the company, it is a kind
of political assassination.
And so again, this is why the First Amendment framing feels like the right view to me.
And that's why this is a matter of principle that is so stark for me.
That's why I wrote a 4,000 word essay that is going to make me a lot of enemies on
the right.
That's why I took this risk because I think this matters.
So what the Department of War ended up doing was signing a deal with OpenAI.
Yes.
OpenAI says they have the same red lines as Anthropic.
They say they oppose Anthropic being labeled a supply chain risk.
If they have the same red lines as Anthropic, it seems unlikely that the Department of
War would have done the deal.
But how do you understand both what OpenAI has said about what is different about how they are approaching this, and why the Trump administration decided to go with them?
So it's unclear to me what OpenAI's contractual protections afford them and what they don't, what sort of is not afforded by them.
I'm wary to comment, because of the national security gotchas, as I mentioned earlier.
And also because it seems like it's changing a lot. Sam Altman announced new terms, new protections, as I was preparing for this interview.
And is that because his employees are revolting?
I think revolt would be a strong word, but I think this is a controversy inside the company.
And one important thing here for everyone trying to model this situation appropriately is
that you must understand that frontier lab CEOs do not exercise top down control over
their companies in the way that a military general might exercise top down control over
the soldiers in his command.
The researchers are hothouse flowers, oftentimes.
They have huge career mobility, they're enormously in demand, and the companies depend on them.
And so if the researchers say, I'm not going to agree with these terms, then the researchers have enormous political leverage here inside of each lab.
So yes, there is some of that going on.
I don't know.
Do the contractual protections mean that much?
I think, honestly, if I were a betting man, I would say probably not because I don't
think you can do this through contract.
What OpenAI has said, and it seems more promising to me, is: we're going to control the cloud deployment environment and we're going to control the safeguards, the model safeguards, to prevent these uses we do worry about. That is more directly in OpenAI's control.
And so this gets you into the situation where you have an extremely intelligent model that
is reasoning using a moral vocabulary that is perhaps familiar to us or perhaps not.
We don't know, but that is reasoning about, okay, is this domestic surveillance or is
it not?
And then deciding whether or not it's going to say yes to the government.
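One very rough way to picture the deployment-environment safeguard being described: a check that lives in the serving stack, between the government user and the model. This is a hypothetical sketch, not OpenAI's actual system; classify_request() and its keyword matching are invented for illustration, and a real safeguard would rely on trained classifiers and the model's own reasoning rather than keywords.

```python
# Hypothetical sketch of a deployment-environment safeguard; not any lab's
# real system. The check sits outside the model and outside any contract.

PROHIBITED_USES = [
    "domestic mass surveillance",
    "autonomous lethal targeting",
]

def classify_request(request: str) -> str | None:
    """Return the prohibited-use category the request falls under, if any.
    Keyword matching here is a toy; a real system would use a classifier."""
    for category in PROHIBITED_USES:
        if any(word in request.lower() for word in category.split()):
            return category
    return None

def handle(request: str) -> str:
    blocked = classify_request(request)
    if blocked:
        return f"Refused: request falls under prohibited use '{blocked}'."
    return f"<model output for: {request!r}>"

print(handle("summarize this batch of foreign communications intercepts"))
print(handle("build surveillance profiles of people inside the country"))
```

The point of putting the check in the deployment environment is that the operator controls it unilaterally, which is what makes it more durable than a clause a customer can demand be deleted.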
But if that were true, I think the question that raises for many laymen is: if what OpenAI has come up with is a technical prohibition that is frankly stronger than what Anthropic could achieve through contract, then why would the Department of War have jumped from Anthropic to OpenAI?
Yeah, I mean, it might be that it's hard to know.
It's hard to know.
And I think some of this, it's worth noting here that some of this might not be substantive
in nature.
It might just be that there are political differences here and there are grudges against
Anthropic, right?
Because now they've had months of bitter negotiations, and now it's blown up into the public, and people have weighed in, and people like me have said the Trump administration is committing
this horrible act, right?
Committing corporate murder, as I called it.
And so there's a lot of emotions and it might just be, no, we don't want to do business.
We just don't trust you.
There's just a breakdown in trust would be the way to put it.
It could just be that.
It really could just be that, but it also might be the case that OpenAI is sort of able to be a more neutral actor that can do business more productively with the government.
And maybe they actually just did a better job, which would be a good case for OpenAI's approach to this.
They really got better safeguards and got the government's business, versus the way that
Anthropic has dealt with this, which has been to be very sincere and straightforward about
their red lines, but in ways that I think annoy a lot of people in the Trump administration
for not entirely bad reasons.
So my read of this, from, you know, various reporting I've done, is that, one, there were by the end really significant personal conflicts and frictions between Hegseth and Emil Michael and Dario and others.
There's a big political friction between the culture of Anthropic as a company and the Trump administration, which is why Elon Musk and others have been attacking them for so long.
Yeah.
I am a little skeptical that OpenAI got safeguards that Anthropic didn't.
I'm not skeptical that Sam Altman and Greg Brockman, Greg Brockman having just given $25 million to the Trump super PAC, have better relationships in the Trump administration and more trust between them and the Trump administration.
I know many people are angry at OpenAI for doing this.
I probably emotionally share some of that.
And at the same time, some part of me was relieved it was OpenAI, because I think OpenAI exists
in a world where they want to be an AI company that can be used by Republicans and Democrats,
that they want to somehow be politically neutral and broadly acceptable.
One little thing that I want to contest a bit here is the notion that Claude is
the sort of left model.
In fact, many conservative intellectuals that I know, whom I think of as some
of the smartest people I know, actually prefer to use Claude, because Claude is the most
philosophically rigorous model.
I don't think Claude is a left model, just to be clear about this.
I think that the breakdown was that Anthropic is an AI safety company.
And in ways I had not anticipated when the Trump administration began, they treated that
world, which is different from the left.
AI safety people are not just the left.
Often hated on the left.
Often hated on the left.
They treated that world as repulsive enemies in a way I was surprised by.
The way I would put this is: among people who are sympathetic to the Trump administration's
view, who would describe themselves perhaps as the new tech right, underneath the surface
there is this view of the effective altruists, that they are evil, they are power-seeking,
they will stop at nothing, that they're cultists and they're freaks and we have to destroy
them.
That is a view that is widely held.
The observation I have always made: I have super stark disagreements with the effective
altruists and the AI safety people and the East Bay rationalists, and again, there are
factions nested within factions here, right?
But those types of people, I have had stark disagreements with them about matters of policy
and about their modeling of political economy.
I think a lot of them have been profoundly naive, and they've done real damage to their
own cause, and you can argue that that damage is ongoing.
At the same time, they are purveyors of an inconvenient truth, a truth more inconvenient,
far more inconvenient than climate change.
And that truth is the reality of what is happening, of what is being built here.
Like, if parts of this conversation have chilled you to the bone, me too, me too.
And I'm an optimist.
I think we can do this.
I think we can actually do this and I think we can build a profoundly better world.
But I have to tell you that it's going to be hard and it's going to be conceptually
enormously challenging and it will be emotionally challenging.
And I think at the end of the day, the reason that people hate this AI safety viewpoint
so much is that they just have an emotional revulsion to taking the concept of AI seriously
in this way.
Except that's not true for a lot of the Trump people you're talking about.
I mean, Elon Musk takes the concept of AI being powerful seriously.
At some point didn't he tweet something like, you know, humanity might
just be the bootloader for superintelligence?
Digital superintelligence.
Marc Andreessen, David Sacks, these people, they might have somewhat different views,
but they don't disbelieve in the possibility of powerful AI, of artificial
general intelligence, eventually even of superintelligence.
But you have this sort of accelerationist view, you know: move forward as fast as you can,
don't be held back by these precautionary regulations and concerns.
And again, I'm glad you brought up this thing, that the right way to think about this isn't
left versus right.
If you know people in the AI safety community, or frankly at Anthropic, you understand
that the politics here are so much weirder, that they do not actually map onto traditional
left versus right.
A lot of them are kind of libertarians.
Many of them are very libertarian.
This is not, we're not talking about Democrats and Republicans here.
We're talking about something stranger.
100%.
But there was an accelerationist versus decelerationist fight, which doesn't even describe Anthropic,
which is itself accelerating how fast AI happens.
But is the most accelerationist of the companies.
I know.
I think it's such a weird dynamic, right?
Yes.
But I will say, one of the key parts of the anger I have heard from Trump people was a feeling
that in making this fight public, which, I mean, the Trump side did first, and it's very
strange how offended the Trump people are given that Emil Michael is the one who
set all this off.
But nevertheless, in making this fight public, they feel that Anthropic was trying to poison
the well of all the AI companies against them, to turn the culture of AI development into
something that would be skeptical of them and would put prohibitions on what they can do, which
is why now OpenAI, in order to work with them, has to have all these safeguards and come
out with new terms and try to quell an employee revolt.
And culturally, I actually don't think you can understand this, this is my theory, without
understanding how many people on the tech right were radicalized by the period in the 2020s
when their companies were somewhat woke, and even before that.
The employees didn't want them working with the Pentagon.
The employees had very strong views on what was the ethical use of even less potent technologies
than AI.
And they are very, very afraid.
People like Marc Andreessen, in my view, are very, very afraid of going back to a place where
the employee bases, which maybe have more AI safety, or left, or whatever it might be,
anti-Trump politics than the executives, have power over these things.
And that power will have to be taken into account.
Yes.
Well, I worry about that too.
And I think the solution to that problem is pluralism.
The solution to that problem is to have, hopefully in the fullness of time, many AIs aligned
to many different philosophical views that conflict with one another.
But you are essentially denying the existence of this problem if what you're trying to do
is assassinate Anthropic here, because it's going to come back.
This is going to come back.
It's going to come back.
We're just going to keep doing this over and over again.
And the logic of this argument eventually ends in lab nationalization.
And in fact, a lot of the critics of Anthropic here and supporters of the Trump administration,
they'll say something to the effect of: well, you talk about how it's like nuclear weapons,
and so, you know, what else did you expect?
You kind of had it coming, is almost the tenor of the criticism. But that does not take
seriously the idea that Anthropic could be right.
What if they are right?
And what if you view the government nationalizing them as a profound act of tyranny?
What do you do?
So Ben Thompson, who's the author of the Stratechery newsletter, wrote in a piece I think
was influential that, quote, it simply isn't tolerable for the US to allow
for the development of an independent power structure, which is exactly what AI has
the potential to undergird, that is expressly seeking to assert independence from US control.
What do you think about that?
Every company on earth and every private actor on earth is independent of US control, right?
I'm not unilaterally controlled by the US government.
And if anyone tried to tell me that I am or that my property is, I would be quite concerned
and I would fight back, which by the way, here we are, right?
I don't think that's a coherent view of how independent power and private property
work in America.
I think the, again, the logical implication of Ben's view, which is surprising coming
from Ben, is that AI labs should be nationalized.
And what I would ask him is does he actually think that's true?
Does he think it would be better for the world if the AI labs were nationalized?
Because if he doesn't, then we're going to have to do something else.
And what's that something else?
And that's the problem.
Everyone making that critique doesn't own the implication of their critique, which
is that the labs should be nationalized.
What do we do about that?
So what's the implication you're willing to own of your perspective?
It is that profoundly powerful technology will exist in the hands, at least for some
time, of private corporations.
And so the idea that Ben is putting out there, which I do think is true, and it could be
a difference in degree or a difference of kind, is that these are powerful enough technologies
that they are kind of independent power structures.
Yeah.
I mean, right now, a corporation is an independent power structure.
There's a lot of independent power structures in a country.
JPMorgan is an independent power structure.
JPMorgan absolutely is an independent power structure.
And it should be.
And it should be.
Yeah.
But if you get to these kinds of technologies that are kind of weaving in and out of everything,
that is something new.
And so how do you maintain democratic control over that if you do?
Well, I think we have a lot of different ways of maintaining democratic control over things.
First of all, market institutions, right, allow for popular input. Obviously it's
not voting, but we do vote in a certain sense in markets, right?
And I think that will be a profoundly important part of how we govern this technology: simply
the incentives that the marketplace creates.
Legal incentives also: things like the common law create incentives that affect every
single actor in society.
And the labs, you know, whoever it is that controls the AI, will be constrained in that
sense, and the AIs themselves will be constrained in that sense.
But the state is kind of the worst actor to have that, for the very reason that it has
the monopoly on legitimate violence.
And so what we need is some sort of an order in which the state continues to hold
the monopoly on legitimate violence, in other words, the state maintains sovereignty,
but it does not control this technology unilaterally because of its
monopoly, because of its sovereignty, in some sense.
But does it have this technology?
Does it have its own versions of it or does it contract with these companies you're
talking about?
Um, that's an interesting question.
Should states make their own AIs?
I think they won't do a very good job of that in practice, but I don't have a principled
philosophical stance against the state doing that.
So long as you have legal protections in place to stop the radical uses of the AI.
But for sure the government uses it, and has a ton of flexibility in how it uses it,
uses it to kill people, right?
Like, in other words, I'm owning a world where there are autonomous lethal weapons that
are controlled by police departments.
And that in certain cases, they can kill human beings, kill Americans, right?
Like, autonomously, the weapons can kill Americans.
I'm owning that view.
Again, that's not in the Overton window right now.
It'll take us a long time to get there, appropriately so, but at some point, that'll probably
be the reality.
That's fine with me, so long as we have the right controls in place.
Right now, we don't have the right controls in place.
Do you have a view on what those controls look like? And I'd add one thing to that,
something that's been on my mind as we've been going through this Anthropic fight: US military
personnel have both the right and actually the obligation to disobey illegal orders.
And one of the controls, so to speak, that we have across the US government is that if
you are an employee of the US government and you do illegal things, you are actually
yourself culpable for that.
You can be tried and you can be thrown in jail.
And when you talk about, you know, autonomous lethal weapons for police officers or for
police stations, well, who's culpable on that?
Who has to defy an illegal order in that respect?
You get into some very hairy things once you've taken human beings increasingly out of the
loop.
Yes.
It is, to me, of profound importance that at the end of the day, for all agent activity,
there is a liable human being who can be sued, who can be brought to court and
held accountable, either criminally or in a civil action. That is extremely important.
For my view of the world to work, that is extremely important, and there are legal mechanisms
we will need for that.
And there are also technological mechanisms for that, because right now we don't quite
have the technological capacity to do that.
This is going to be of central importance.
We need to be building this capacity.
There will be rogue agents that are not tied to anyone, but that can't be the norm.
That has to be the extreme abnormality that we seek to suppress.
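One way to picture the kind of technological mechanism Ball is gesturing at: a tamper-evident log that binds every agent action to a named human principal who can be held accountable. The sketch below is built on my own assumptions, with hypothetical names; a real system would need cryptographic signatures and proper key management, not just a hash chain.

```python
# Sketch of a hash-chained audit log tying agent actions to a liable
# human. Hypothetical design, not any lab's actual mechanism.

import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AgentAction:
    agent_id: str     # which autonomous agent acted
    principal: str    # the human legally responsible for the agent
    action: str       # what the agent did

def record_action(log: list[dict], act: AgentAction) -> str:
    """Append an action to the log and return its hash.

    Each entry includes the previous entry's hash, so deleting or
    editing a record later is detectable: there is always a trail
    back to an accountable person.
    """
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = asdict(act) | {"prev_hash": prev_hash, "ts": time.time()}
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry | {"hash": entry_hash})
    return entry_hash

audit_log: list[dict] = []
record_action(audit_log, AgentAction(
    agent_id="agent-0042",
    principal="jane.doe@example.com",   # the liable human being
    action="filed a records request on the user's behalf",
))
```

The point of the chain is the norm Ball describes: an agent action with no principal attached would simply fail to enter the log, making untraceable "rogue" activity the detectable exception rather than the default.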
Let's say you're listening to this, and this has all been both weird and a little bit frightening.
And the thing you think coming out of it is: I'm afraid of any government having this
kind of power. You know, we talk about, Dario likes to talk about, what is it?
A country of geniuses in a data center.
Yes.
But if you're talking about a country of Stasi agents in a data center, you know, in
whatever direction you think, right, speech policing, whatever it might be.
And again, if you believe these technologies are getting better, which I do, and you
believe they're going to get better from here, which I also do, then
whether you're liberal or conservative, Democrat or Republican, it raises real questions
of how powerful you want the government to be and what kinds of capabilities you want
it to have, questions you didn't quite have to face before, because it was expensive
and cumbersome for the government to do anything like what will now become possible cheaply.
Yes.
And so we get back to the core issues of the American founding.
The American government is a government that was founded in skepticism of government.
It was founded by people that were worried about tyranny, that were worried about state
power and put a lot of thought into how to restrict that.
So this notion that democracy is synonymous with the government having unilateral ability
to do whatever it wants with this technology cannot possibly be true, that just cannot
possibly be true.
And those restrictions, you know, how we shape those restrictions and how we trust that
they're actually real.
Yeah, this is among the central political questions that we face.
But what you have to keep in mind here is that the institution of government itself could
change in qualitative ways that feel profound to us, in the fullness of time.
And that is a hard thing to grapple with too.
In the same way that what we think of as the government today is unspeakably different
from what someone thought of as the government in, you know, the Middle Ages.
I think that is a good place to end.
So, always our final question: What are three books you'd recommend to the audience?
Rationalism in Politics by Michael Oakeshott, and in particular the essays "Rationalism
in Politics" and "On Being Conservative."
Empire of Liberty by Gordon Wood, the book about the first 30 or so years of our Republic.
And Roll, Jordan, Roll by Eugene Genovese.
Thank you very much.
Thank you.
