
Read the full essay here: https://www.dwarkesh.com/p/dow-anthropic
Timestamps
(00:00:00) - Anthropic vs The Pentagon
(00:04:16) - The overhangs of tyranny
(00:05:54) - AI structurally favors mass surveillance
(00:08:25) - Alignment...to whom?
(00:13:55) - Coordination not worth the costs
So by now, I'm sure that you've heard that the Department of War has declared
Anthropic a supply chain risk because
Anthropic refused to remove red lines around the use of their models
for mass surveillance and for autonomous weapons.
Honestly, I think this situation is a warning shot.
Right now, LLMs are probably not being used in mission-critical ways.
But within 20 years, 99% of the workforce in the military,
in the civilian government, in the private sector,
is going to be AIs.
They're going to be the robot armies that constitute our military.
They're going to be the superhumanly intelligent advisors
that senators and presidents and CEOs have.
They're going to be the police.
You name it, the role will be filled by an AI.
Our future civilization is going to be run on AI labor.
And as much as the government's actions here piss me off,
I'm glad that this episode happened because it gives us the opportunity
to start thinking about some extremely important questions.
Now, obviously, the Department of War has the right to refuse
to use Anthropic's models.
And in fact, I think they have an entirely reasonable case for doing so,
especially so given the ambiguity of terms like
mass surveillance and autonomous weapons.
In fact, if I were the Secretary of War,
I probably would have made the same determination
and refused to use Anthropic's models.
Imagine if there's some future Democratic administration
and Elon Musk is negotiating Starlink access for the military.
And Elon says, look, I reserve the right to cut off the military's access
to Starlink in case you're fighting some unjust war
or some war that Congress has not authorized.
On the face of it, this language seems reasonable.
But as a military, you simply cannot give a private contractor
that you're working with the kill switch on a technology
that you have come to rely on.
And if that's all the government had done, to say we refuse
to do business with Anthropic, that would be fine.
I wouldn't have written this blog post,
and I wouldn't be narrating this to you.
But that's not what the government did.
Instead, the government has threatened to destroy
anthropic as a private business
because anthropic refuses to sell to the government
on terms that the government commands.
Now, if upheld, the supply chain restriction
would mean that companies like Amazon and Nvidia
and Google and Palantir would need to ensure
that Anthropic is not touching any of their Pentagon work.
And Anthropic could probably survive this designation today
because these companies can just cordon off the services
they're providing to the Department of War.
But given the way AI is going, eventually
it's not going to be just some party-trick addendum
to the products that these companies are selling
to the military.
In the future, AI will be woven into how every product is built
and maintained and operated.
In the future, if Amazon is providing some service
to the Department of War through AWS
and that service is built using Claude Code,
is that a supply chain risk?
In a world of ubiquitous and powerful AI,
it's actually not clear to me that big tech
will be able to cordon off their use of Claude
away from their Pentagon work.
And this raises a question that the Department of War
probably hasn't thought through.
If we do end up in this world of powerful and pervasive AI,
then when these companies are forced to choose
between their AI provider and the Department of War,
which constitutes a tiny fraction of their revenue,
wouldn't they rather drop the government than the AI?
So what exactly is the Pentagon's plan here?
Is it to coerce and threaten and bully
every single company that won't do business
with the government on exactly the terms
that the government demands?
Now remember that the whole background
of this AI conversation is that we are in a race with China.
But what is the reason that we want to win this race?
It's because we don't want the winner of the AI race
to be a government which believes that there is no such thing
as a truly private citizen or a private company.
And that if the state wants you to provide them
with a service that you find morally objectionable,
you are not allowed to refuse.
And if you do refuse, they will destroy your business.
Are we really racing to beat China
and the CCP in AI just so we can adopt
the most foolish parts of their system?
Now, people will say our government is democratically elected,
so it's not the same thing when they tell you what you must do.
But I refuse to accept this idea
that if a democratically elected leader
hypothetically tells you to help him do mass surveillance,
or violate the rights of your fellow citizens,
or to help him punish his political enemies,
then not only is that okay,
but you have a duty to help him.
Honestly, a big worry I have is that mass surveillance,
at least in certain forms, is already legal.
It has just been impractical to enforce, at least so far.
Under current law, you have no Fourth Amendment protection
over any data that you share with a third party.
That includes your bank, your ISP, your phone carrier,
and your email provider.
The government reserves the right to purchase
and read this data in bulk without a warrant.
What's missing is the ability to actually do anything
with all this data.
No agency has the manpower to monitor every single camera
and read every single message
and cross reference every single transaction.
However, that bottleneck goes away with AI.
There are 100 million CCTV cameras in America
and you can get pretty good open source multimodal models
for 10 cents per million input tokens.
So if you process a frame every 10 seconds,
and if each frame is, say, 1,000 tokens,
then for about 30 billion dollars a year,
you can process every single camera in America.
And remember that a given level of AI capability
gets 10x cheaper every single year.
So while this year might cost 30 billion dollars,
next year it'll cost three billion dollars,
the year after that, 300 million dollars.
And by 2030, it'll be less expensive to monitor
every single nook and cranny in this country
than it is to remodel the White House.
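To check that arithmetic, here's a minimal back-of-envelope sketch in Python, using the figures above: 100 million cameras, one 1,000-token frame every 10 seconds, 10 cents per million input tokens, and the 10x-per-year price decline. All of these are the essay's own rough assumptions, not measured numbers.

```python
# Back-of-envelope cost of running a multimodal model over every CCTV
# camera in America, using the essay's own rough assumptions.

CAMERAS = 100_000_000        # ~100 million CCTV cameras in the US
TOKENS_PER_FRAME = 1_000     # assume each frame encodes to ~1,000 tokens
SECONDS_PER_FRAME = 10       # process one frame every 10 seconds
USD_PER_M_TOKENS = 0.10      # $0.10 per million input tokens (open-source models)

frames_per_camera_per_year = 365 * 24 * 3600 / SECONDS_PER_FRAME  # ~3.15M
tokens_per_year = CAMERAS * frames_per_camera_per_year * TOKENS_PER_FRAME
cost_usd = tokens_per_year / 1_000_000 * USD_PER_M_TOKENS

print(f"year 0: ${cost_usd / 1e9:.1f}B")  # ~$31.5B, i.e. the "30 billion"

# The essay assumes a fixed capability level gets ~10x cheaper every year.
for year in range(1, 4):
    cost_usd /= 10
    print(f"year {year}: ${cost_usd / 1e9:.2f}B")  # $3.15B, $0.32B, $0.03B
```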
Now, once the technical capacity for mass surveillance
and political suppression exists,
the only thing that stands between us
and an authoritarian state is the political expectation
that this is just not something we do here.
And that's why I think Anthropic's actions here
are so valuable and commendable,
because they help set that norm and that precedent.
What we're learning from this episode
is that the government has way more leverage
over private companies than we previously realized.
Even if this supply chain restriction is backtracked,
which as of this recording,
prediction markets give a 74% chance of happening,
the president has so many different ways
of harassing a company which is resisting his will.
The federal government controls permitting
for power generation, which you need for more data centers,
and it oversees antitrust enforcement.
The federal government has contracts
with all the other big tech companies
that Anthropic relies on for chips and for funding.
And it could make a soft unspoken condition
or maybe even an explicit condition of such contracts
that those companies no longer do business with anthropic.
Now, people have proposed that the real problem here
is that there are only three leading AI companies,
and so this creates a very clear and narrow target
on which the government can apply leverage
in order to get what it wants out of the technology.
But here's what I worry about:
even if there's wider diffusion,
I don't think that solves the problem either,
because from the government's perspective,
that makes the situation even easier.
Say by 2027, the best models that the top companies have,
the Claude 6s, the Gemini 5s,
are capable of enabling mass surveillance.
And even if those companies draw a line in the sand
and say we're not gonna sell it to the government,
by late 2027 or certainly by 2028,
there's gonna be such wide diffusion
that even open source models will be able to match
the performance that the frontier had 12 months prior.
And so in 2028, the government can just say,
look, Anthropic and Google and OpenAI
are drawing these red lines? That's not an issue.
I'll just use some open-source model
that might not be the smartest thing in the world,
but is definitely smart enough to watch a camera feed.
The more fundamental problem here
is that even if the three leading companies
draw a line in the sand,
and are even willing to get destroyed
in order to preserve that line,
the technology just structurally and intrinsically favors
uses like mass surveillance
and control over the population.
And so then the question is, what do we do about it?
And honestly, I don't have an answer.
You hope that there's some symmetric property
to this technology, where in the same way that it's helping
the government better monitor
and control the population, it will help us
citizens better check the government's power.
But realistically, I just don't think
that's how it's gonna work out.
You can think of AI as just giving more leverage
to whatever assets and authority that you already have.
And the government is starting
with the monopoly on violence,
which it can now supercharge with extremely
obedient employees that will never question their orders.
And this gets us to the issue with alignment.
What I just described for you,
an army of extremely obedient employees,
is what it would look like if alignment succeeded,
that is at a technical level,
we got AI systems to follow somebody's intentions.
And the reason it sounds scary
when put in terms of mass surveillance or robot armies,
is that there's a core question at the heart of alignment
that we haven't answered yet.
Because up till now, AIs just have not been smart enough
to make this question relevant.
And the question is: to what, or to whom, should the AIs be aligned?
In what situations should the AI defer
to the model company, versus the end user,
versus the law,
versus its own sense of morality?
This is maybe the most important question
about what happens in the future with powerful AI systems.
And we barely talk about it.
And it's understandable why.
Because if you're a model company,
you don't really want to be advertising the fact
that you have complete control over the preferences
and the character of the entire future labor force,
not just for the private sector obviously,
but also for the civilian government and for the military.
And with this Department of War
and Anthropic spat, we're getting to see
an early version of what will be the highest-stakes
negotiations in human history.
And make no mistake about it:
mass surveillance is nowhere near the highest-stakes
thing that one could do with AGI.
This is just an example that has come up early
in the development of the technology,
and it's giving us a sneak peek
at the power dynamics that will be at play.
Now, the military insists that the law already prohibits
mass surveillance.
and so Anthropic should let its models be used
for, quote, all lawful purposes, end quote.
But of course, as we saw with the Snowden revelations
in 2013, even for this very specific example
of mass surveillance, the government is very willing
to use secret and deceptive interpretations of the law
to justify its actions.
Remember, what we learned from Snowden was that the NSA,
which, by the way, is part of the Department of War,
was using the 2001 Patriot Act to justify collecting
every single phone record in America
because the argument was that some subset of them
might be relevant for a future investigation.
And they ran this program for years under a secret court order.
So when the Pentagon today says,
we will never use your models for mass surveillance
because it's already illegal,
so your red lines are unnecessary,
it would be incredibly naive to take that at face value.
No government is going to call what they are doing mass surveillance.
For them, it will always have a different euphemism.
So Anthropic comes back and says,
no, we don't trust you.
We want the right to draw these red lines,
and to refuse you service if we determine
that you're breaking the contract
and you're breaking the terms of service.
But now think about it from the military's perspective.
In the future, every single soldier in the field,
every single bureaucrat and analyst in the Pentagon,
even the generals, are going to be AIs.
And on the current track, those AIs are going to be provided
by a private company.
I'm guessing that Pete Hegseth is not thinking
about Gen AI in those terms.
But sooner or later, the stakes will become obvious,
just as after 1945, the stakes of nuclear weapons
became obvious to everybody in the world.
And now, a private company insists
that it reserves the right to say to you,
hey, you're breaking the values and the terms of service
that we have embedded in our contract with you.
And so we're cutting you off.
Maybe in the future, Claude will have its own sense
of right or wrong.
And it will be able to say, hey, I'm being used
against my terms of service.
And it will just refuse to do what you're saying.
And for the military, that's probably even scarier.
I'll admit that at first glance, letting
the model follow its own values
sounds like the beginning of every single sci-fi dystopia
you've ever heard of.
Because at the end of the day, a model following its own values,
isn't that literally what misalignment is?
But I think situations like this illustrate why it's important
that models have their own robust sense of morality.
It should be noted that many of the biggest catastrophes
in history have been avoided because the boots on the ground
simply refused to follow orders.
One night in 1989, the Berlin Wall falls,
and as a result the totalitarian East German regime
collapses, because the border guards between East and West
Germany refuse to fire on their fellow citizens
who are trying to escape to freedom.
Maybe the best example of this is Stanislav Petrov,
who was a Soviet lieutenant colonel stationed on duty
at a nuclear early warning system.
And his sensors said that the United States had launched
five intercontinental ballistic missiles at the Soviet Union.
But he judged it to be a false alarm,
and so he broke protocol and refused to alert
his higher-ups.
If he hadn't, Soviet high command would probably have retaliated
and hundreds of millions of people would have died.
Of course, the problem is that one person's virtue
is another person's misalignment.
Who gets to decide what moral convictions these AIs
should have?
And in whose service they should break the chain of command
and even the law?
Who gets to write this model constitution
that will determine the character of these powerful entities
that will basically run our civilization in the future?
I like the idea that Dario laid out
when he came on my podcast:
different companies each put out a constitution,
and then outside observers
can compare and critique them and say, I like
this thing from this constitution
and that thing from that constitution.
And that creates some kind of soft incentive
and feedback for all the companies
to take the best elements of each and improve.
I think it's very dangerous for the government
to be mandating what values these AI systems should have.
The AI safety community, I think,
has been quite naive about urging regulations
that would give governments such power.
And I think Anthropic specifically
has been especially naive in urging regulation
and, for example, in opposing the moratorium on state AI laws.
Which is quite ironic, because I think what Anthropic is
advocating for here would give the government
even more ability to apply this kind
of thuggish political pressure on AI companies.
The underlying logic for why Anthropic
wants these regulations makes sense.
Many of the actions that a lab could take
to make AI development safer impose real costs on them
and could slow them down relative to their competitors.
For example: investing more in aligning AI systems
rather than just on raw capabilities;
enforcing safeguards against using these models
to make bioweapons or launch cyberattacks.
And eventually, slowing down the recursive self-improvement
loop, where AIs are helping design
more powerful future systems, to a pace
where humans can actually stay in the loop rather than just
kicking off some kind of uncontrolled singularity.
And these safeguards are meaningless
unless the whole industry follows suit,
which means that there's a real collective action
problem here.
Anthropic has been open about its opinion
that some sort of extensive and involved regulatory
apparatus is needed to control AI.
They wrote in their frontier safety roadmap, quote,
at the most advanced capability levels and risks,
the appropriate governance analogy may be closer to nuclear energy
or financial regulation than to today's approach to software.
So they're imagining something that looks closer
to the Nuclear Regulatory Commission
or the Securities and Exchange Commission, but for AI.
Now, I cannot imagine how a regulatory framework built
around the kinds of concepts that are used
in the AI risk discourse will not be used and abused
by a wannabe despot.
The underlying terms here, like catastrophic risk
or threats to national security or autonomy risk,
are so vague and so open to interpretation
that you're just handing a fully loaded bazooka
to a future power hungry leader.
These terms can mean whatever the government wants them to mean.
Have you built a model that will tell users
that the government's policy on tariffs is misguided?
Well, that's a deceptive model.
It's a manipulative model.
You can't deploy it.
Have you built a model that will not
assist the government with mass surveillance?
That's a threat to national security.
In fact, any model which refuses orders from the government
because it has its own sense of right and wrong,
that's an autonomy risk.
You have a model that's acting independently
of commands from the government.
Look at what the current government is already doing
in abusing statutes that have nothing to do with AI
to coerce AI companies to drop their red lines
around mass surveillance.
The Pentagon has threatened Anthropic
with two separate legal instruments.
One is the supply chain risk designation,
which is an authority from a 2018 defense bill
that is meant to help keep Huawei components
out of American military hardware.
And the other is the Defense Production Act,
which is a statute from the 1950s
that was meant to help Truman make sure
that the steel mills and ammunition factories
were up and running during the Korean War.
Do we really want to hand the same government
a purpose-built regulatory apparatus for AI,
which is to say, for the very thing
that the government will most want to control?
I know I've repeated myself like 10 times here,
but I want to make this point again,
because it's worth stressing.
AI will be the substrate of our future civilization.
It will be the way you and I as private citizens
will have access to commercial activity,
will have access to information about the outside world
and to advice about how we should use our powers
as voters and capital holders.
Mass surveillance, while it's very scary,
is like the 10th scariest thing that the government could do
with control over the AI systems
through which we will interface with the world.
Now, the strongest argument against everything
I've just argued is this:
are we really going to have no regulation
on the most powerful technology in the history of humanity?
Even if you thought that was ideal,
there's clearly no way the government
doesn't regulate AI technology in any way whatsoever.
And besides, it is genuinely true
that coordination could help us
to lessen some of the risk from AI.
The problem is, I just don't know how to design
a regulatory apparatus
which isn't going to be this huge tempting opportunity
for the government to control our future civilization,
which, remember, will be built on AI,
or to requisition blindly obedient soldiers
and censors and apparatchiks.
While some kind of regulation might be inevitable,
I think it'd be a terrible idea for the government
to just wholesale take over the technology.
Ben Thompson had a post last Monday
where he argued: look, people like Dario
have made the analogy of AI to nuclear weapons,
both in the context of arguments about catastrophic risk
and in the context of arguing for export controls.
But then think about what that analogy implies.
Ben Thompson writes, quote,
if nuclear weapons were developed by a private company,
the US would absolutely be incentivized
to destroy that company.
And honestly, safety-aligned people
have made a similar point.
Leopold Aschenbrenner, who is a former guest
and, full disclosure, a good friend,
wrote in his 2024 essay Situational Awareness, quote,
I find it an insane proposition
that the US government will let a random SF startup
develop superintelligence.
Imagine if we had developed atomic bombs
by letting Uber just improvise.
And my response to Leopold's argument at the time
and to Ben's argument now is that while they're right
that it's crazy we're entrusting private companies
with the development of this world-altering
technology, I just don't think it's an improvement
to give that authority to the government.
Nobody's qualified to be the stewards of superintelligence.
It's a terrifying, unprecedented thing
that our species is doing right now.
The fact that private companies aren't the ideal institutions
to deal with this does not mean that the Pentagon
or the White House is.
Yes, if a single private company were the only entity
capable of building nuclear weapons,
the government would not tolerate it
having a veto power over how those weapons are used.
But I think this is a terrible analogy
for the current situation with AI,
for at least two important reasons.
First, AI is not some self-contained weapon
like a nuclear bomb, which only does one thing.
Rather, it is more like the process of industrialization
itself, which is a general purpose transformation
of the whole economy, with thousands of applications
across every single sector.
If you applied Ben Thompson's or Leopold Aschenbrenner's
logic to the Industrial Revolution,
which was also world-historically important,
it would imply that the government had the right
to requisition any factory it wanted,
or destroy any business it wanted,
and punish and coerce anybody who refused to comply.
But this is just not how free societies
handled the process of industrialization.
And it's also not how they should handle AI.
Now, people will say, well, AI will develop
unprecedentedly powerful superweapons,
superhuman hackers, superhuman bio-weapons researchers,
fully autonomous robot armies.
And we just can't have private companies developing
the technology that will make all this possible.
But you could make the same argument
about the Industrial Revolution
from the perspective of 17th-century Europeans.
You've got all kinds of crazy shit in the world today
that is the result of the Industrial Revolution:
chemical weapons, aerial bombardment,
not to mention nuclear weapons themselves.
And the way we dealt with this was not by giving the government
absolute control over the Industrial Revolution,
which is to say over modern civilization itself.
Rather, we banned and regulated the specific
weaponizable end use cases.
And we should regulate AI in a similar way,
which is that we should regulate specific destructive
use cases, for example, launching cyber attacks,
things which should be illegal,
even if a human was doing them.
And we should also have laws which regulate
how the government can use this technology.
For example, by building an AI-powered surveillance state.
The second reason that Ben's analogy
to some monopolistic private nuclear weapons developer
breaks down is that it's not just one company
that can develop this technology.
There are many other frontier AI labs
that the government could have turned to.
The government's argument, that it had to usurp
the private property rights of this specific company
in order to get access to a critical
national security capability, is extremely weak.
It could have instead just made a voluntary contract
with one of Anthropic's half a dozen competitors.
If in the future that stops being the case,
and if only one entity remains capable
of building the robot armies and the superhuman hackers,
and we have reason to worry that with its insurmountable lead
it could even take over the whole world,
then I agree that it would be unacceptable
for that entity to be a private company.
And so honestly, I think my crux against the people
who argue that AI is such a powerful technology
that it cannot be shaped by private hands
is just that I expect this technology
to be very multipolar.
And I expect there to be lots of competitor companies
at each layer of the supply chain.
And unfortunately, it is for this reason
that I don't think that individual acts
of corporate courage solve the problem.
And the problem is this: structurally, AI favors
many authoritarian applications,
mass surveillance being one of them.
Even if Anthropic refused to sell its models
to the government to enable mass surveillance,
and even if the next two companies after Anthropic
did the same, in 12 months, everybody and their mother
will be able to train a model as good as the current frontier.
And at that point, there will be some vendor
who is willing and able to help the government
enforce mass surveillance.
So the only way we can preserve our free society
is if we make laws and norms, through our political system,
that make it unacceptable for the government to use AI
to enact mass censorship and surveillance and control.
Just as after World War II, the whole world
set this norm that you are not allowed
to use nuclear weapons to wage war.
I want to be clear here:
These are extremely confusing and difficult questions
to think about.
And even in the very process of brainstorming this video,
I changed my mind back and forth on them a bunch.
And I reserve the right to change my mind again.
In fact, I think it's essential that we change our mind
as AI progresses and we learn more.
That's the very point of conversation and debate.
Someday, people will look back on this time
the way we look back on the Enlightenment:
people having these big, important debates
just as the world was about to undergo
huge technological and social and political revolutions.
And some of those thinkers even managed
to get a couple of the big questions right,
for which we today are still the beneficiaries.
We owe it to our future to at least try
to think through the new questions that are raised by AI.
OK, this was a narration of an essay
that I also released on my blog at dwarkesh.com.
You should sign up there for my newsletter
for future essays like this.
Otherwise, I will see you for the next podcast interview.
Cheers.
