
We explore a tumultuous week in AI as the U.S. government banned Anthropic's Claude AI from military use, only for it to be deployed in the Iranian operation the next day.
We analyze the ethical dilemmas faced by AI firms navigating government demands, spotlighting CEO Dario Amodei's refusal to compromise on safety. The discussion intensifies with OpenAI's bold offer to the Pentagon, igniting a rivalry that questions corporate power in military engagements.
------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:09 AI Used as a Weapon
1:19 The Pentagon's Ultimatum
4:45 Dario's Ethical Stand
10:51 OpenAI's Strategic Shift
14:25 Irony of Military Operations
18:00 Public and Private Divide
19:26 The Future of AI and Warfare
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
On Friday, the US banned Anthropic from being used in any military operation after Dario refused to
cave to their demands of being used for mass surveillance and autonomous weapons. Then literally hours
later, in the early morning of Saturday, Claude was used to perform and execute the biggest, most
important military operation since the invasion of Iraq. This has by far been the most insane week
in AI. There was drama and deceit between the top two AI labs, OpenAI and Anthropic, and the
Pentagon wanted uncensored access for use of Claude and OpenAI's ChatGPT. AI models are not
just chatbots. At this point, it's a geopolitical weapon being used for warfare. It's amazing how
the biggest news on earth is now just AI news. Not a single large thing happens that doesn't have AI
integral to the decision-making process. And this is no different. And I think the question that
it left everyone at the end of this is like, who really controls AI? Because for the first time what
we're seeing is these private companies have so much leverage, so much power that they're
starting to conflict with the actual elected officials and government. And I think that's kind of
at the core of this discussion. But if you missed anything over the last 72 hours, don't worry.
We're going to get you caught up starting with what happened early last week that sparked this debate
because at this time, we didn't know that there was any war plans happening. No one had any idea
that there were any attacks planned. It was just an AI story. So maybe we'll start with that AI
story. That AI story specifically was the news that revealed that the Pentagon, which is part of
the US Department of War, had been using Claude to orchestrate and execute their capture of the
former president of Venezuela, Maduro. And that shocked everyone, because up until that point,
people were just kind of prompting it to vibe code stuff and to answer their silly questions
about what they wanted to cook tonight. So to see this real-life example of an AI being used,
not just as a chatbot tool, but for something as important as military warfare,
was a big shock and surprise, which then sparked a debate around what the model wanted to be used for.
Now, the head of the Pentagon, Pete Hegseth, issued an ultimatum shortly after, which raised
suspicions around what the conversations were like between the US Department of War and the owners
of Claude, Anthropic. And it was all but good. The issue they were facing was this:
Anthropic had been asked to give them an uncensored version of Claude, which could be used for
two things: mass surveillance, which included domestic mass surveillance of people within the US,
which was a breach of the Fourth Amendment, and also for use within autonomous weapons,
meaning that there were no humans involved and an AI would control how weapons were executed
and fired. Dario's comment against that was simply that he did not feel comfortable giving Claude that access.
He didn't think it was good enough, and also that it was a direct breach of law. So Pete Hegseth
issued an ultimatum with a deadline a few days later, on the Friday, saying, you either agree to our
demands or there are consequences. Yeah, and it's really interesting to hear how close they were,
but seemingly unable to reach a deal. It seemed like they had everything down to just, what, two of
these red lines. And a lot of that conversation happened around whether they are allowed, whether
the US government and the Department of War is able to use these models without the express written
consent and approval of Anthropic when it comes to making kinetic decisions, things that actually
result in harm being caused. And there's an interesting interview that I saw, or just like a
kind of report that said that when asked about what was it the nuclear weapon, like if someone shot
a nuclear weapon at the United States, does the Department of War have, oh, here it is. Yeah,
this is perfect. Does the Department of War have the opportunity and have the right to use Anthropic
and Claude models to determine like what to do about that to help shoot it down. And then the
response from Dario was basically like, well, call us first and then we'll talk through it and we'll
let you know. And I can understand why Dario wants that to be the outcome and I can understand why
the Department of War is absolutely furious because they're like, you're not the elected official,
you are not the military. You don't have the right to sign off on our nuclear plans. But for Dario,
he very much feels like he created this incredibly strong tool. And what is Anthropic known for at
the core of its DNA? Well, it's safety. It's AI alignment. And I'm sure they want to feel like
they have a heavy hand on it, so it doesn't get out of hand. And I think that's ultimately where
this conflict came from: Anthropic wanting to abide by their safety principles, but the Department
of War and the government and the military really being like, okay, yeah, but we're the military.
And like, if someone's attacking us, we need to use all the tools at our disposal. And we can't
be waiting for you to answer the phone to tell us if it's okay or not. Yeah, the crux of the
issue comes down to the contractual language. The Pentagon was willing to say, hey, yeah,
you can keep us within all bounds of the law. And Dario's response was simply, the law
isn't really prepped and ready for the future of AI. Like, right now, you could use our model
legally to get access to a bunch of people's data, and just get away with that. And
Dario, given his own fundamental ethics behind building Anthropic, wasn't comfortable with that.
But the US Department of Wars response was simply, hey, this is a matter of national security.
And we can't have a private company, a private unelected official dictate how we
perform national defense, which you can see fair takes on either side of this point. And it's
extremely complicated and nuanced. And in Dario's exact response, there's this very
poignant line where he says, we cannot, in good conscience, accede to their request. This was
in response to Pete's ultimatum on that Friday, which led to just like a crazy
public, I guess debate or fight between these guys. You've got Pete publicly saying, Anthropic
just delivered a masterclass in arrogance and betrayal, as well as the textbook case of how not
to do business with the United States government or the Pentagon. And a bunch of responses were
released after that showing that Dario had not been answering their phone calls or was just being
inflexible. And then Dario, on his side, was saying, we need this contractual language involved,
because otherwise this AI could be used for nefarious purposes. So, just so, so much drama.
In addition to the Secretary of War, having some choice words for Anthropic, Donald Trump chimed
in with a rather angry and loud all-caps message saying, the United States of America will never allow
a radical left woke company to dictate how our great military fights and wins wars, among other
things. And the public backlash, the public sentiment around Anthropic and Trump, kind of shifted
at this moment to being supportive of Anthropic. People were glad that it was standing on its morals
and its values. And as a result, the App Store showed that Claude actually became number one in the
world, when a few weeks ago it was only number 131. And this part of the show is brought to you by our sponsor
and a supporter of the show, Polymarket. And Polymarket is a great way to determine things
like who is going to be the number one app in the app store on March 6. And what's interesting
here is there's a 62% chance that the current leader actually changes hands. It's showing that
ChatGPT is going to be the new king on the block, when in reality, there's another market that shows
Anthropic is actually most likely to have the best model by the end of March, which is loosely
the same exact time. And I love how they've used this to kind of gauge what's the best, because
now we kind of have an idea that GPT-6 isn't coming out this month,
but we know for a fact that Anthropic has the winner with Opus 4.6.
I was just looking and wondering why ChatGPT might be taking the lead here despite
there being so much positive approval for Claude. And that might have something to do with our
friend Sam Altman at OpenAI, who swooped in at the last minute, after all the drama between Dario
and the US Department of War, with his own proposition, basically saying, hey, you can use
ChatGPT instead, and we'll agree to your terms. As long as you want to keep things within
lawful use, we're going to draft up our own safety stack and red lines. What do you think about
this? And the agreement was pretty extensive. They put out an open statement. Now, there's a lot
of minutiae in the details. But the way I see it, or my favorite highlights from this, is they pretty
much agreed to the simple terms, but there were some slight changes: they agreed
that OpenAI won't be used for mass domestic surveillance, and there will be no use of OpenAI technology to direct
autonomous weapon systems. So these are the two things that Dario wanted, but it's all under
lawful use, which is the issue that Dario had. And then there's a third thing, which is no use of
OpenAI technology for high-stakes automated decisions. AKA, there should always be a human in the loop,
and someone conceivably held accountable for any call going forward.
This is crazy. This is the part of the story in which I just kind of lost my mind because
it didn't make any sense. It was like, okay, Anthropic is saying no to the Pentagon.
Clearly, they can't figure it out. Sam Altman was on CNBC earlier in the day supporting Anthropic.
And then that evening, they signed the deal with the Department of War on supposedly the
same exact terms, because they didn't want a red line. And this was like, oh my god, what do you
mean? Was Sam just manipulating the world so that he could just slide in and actually steal the deal from
Anthropic? And in a way, he did. It appeared as if the Department of War was trying to call Dario
at the 5:01 deadline. He didn't answer the phone. They gave him a couple minutes. They picked up
the phone, called Sam, and now there's a deal. And to your point, it seems like there is this key
difference. And while a lot of the morals that they were standing on are the same, the key difference
is basically in the responsibility and the lawfulness. Like one is kind of proactive. One is retroactive
where Anthropic wanted the ability to sign off on things. Whereas OpenAI is saying, well,
you're the government. You're the military. You can make these decisions so long as they are lawful
and so long as someone is responsible for kind of claiming responsibility for these decisions.
And we kind of know how that works, where, I mean, perhaps that is not as foolproof as Anthropic's plan,
but it resulted in them getting, what was it, the $200 million deal, and a lot of publicity with
the government. So it was a big win for OpenAI. And the point of the story where I was like, what
is going on here? This is chaos. And mind you, this is just hours before the actual first strikes
were about to start. So there was a lot of things happening in anticipation of this mission.
Yeah, all of this happened within, like, I can't emphasize this enough, it happened within like
four to six hours. As this happened, I was just sitting on X, scrolling, and I was
sharing something and I was like, oh my god, wait, like a new thing happened. Then a new thing happened.
Yeah. Friday night was not a night to go out because the internet was at its peak. The truth was
being revealed. I think X had the highest amount of engagement over the weekend; Saturday and Sunday
both broke new records. Great time to monitor the situation. Same. But back to the
Sam agreement: they agreed to all lawful use. And the explicit difference there is that they'll
settle all kinds of grievances in a court of law. So retroactive, as you just said, which is the
thing that Dario was just completely against. But there's also some other important safety lines
that they put in that I actually think are useful towards addressing this. So, one, the models, or
ChatGPT, can only be deployed through the cloud. And the reason why this is a better implementation
versus letting the government run it locally is that you can monitor and you can track what they
do to make sure that they're not doing anything nefarious. Number two, OpenAI has a specific
vetted team of American software engineers that will always work on these models and update
them. And the best part is the government is hands off on this entire approach.
And then the third important point is Dario's disagreement. His problem was around usage policies.
So he basically wanted to dictate when the Pentagon could or could not perform, let's say,
a military strike, whereas in OpenAI's deal, they use usage policies and a software stack that kind
of like helps them navigate through all of these different legal issues. So it's just a much more
detailed and nuanced plan. A lot of people will kind of go against OpenAI for this, but, and this
might be a hot take, I actually think it's a very proactive way to deal with this situation
for what we have right now. I think legislation will change eventually going forward. I don't
think it's perfect. I don't think it's ready for AI-enabled warfare, but I think it's a good step
in the right direction ultimately. And there was this really awesome comment from OpenAI's
head of national security Katrina, who kind of explains these nuances and saying that the safety
stack and usage policies that we've set up here are going to be more reliable. They called
out Anthropic, basically saying that theirs wasn't well thought out and ours is way, way better.
The other final cool part about this agreement is that OpenAI explicitly states to the Pentagon
that they should offer these terms to every single AI model lab. So they're not trying to secure
an exclusive deal. This could be for anyone, and OpenAI is just the first to sign.
Can we take a moment to just appreciate the fact that OpenAI, the AI company, does have a head
of national security? I think this gets to the core of the message of this episode, and
the message of this entire narrative this weekend, which is: who is really in control of this? And
when I say in control, in control of everything, who has the leverage to make the
decisions at the end of the day? And it seems like, I mean, prior to OpenAI
signing this deal, it seemed like they were forming this kind of force against the government,
right? This oppositional force where Anthropic was like, we need this to be safe. OpenAI and Sam
Altman on TV agreed. Google and a lot of employees from that company and DeepMind were
kind of on board. They were saying, we're going to draw these hard lines too. We're not working with
you. And it created this interesting power dynamic where they actually did have enough leverage to
inflict damage on I guess matters of national security on the military and limit their ability
to use these prime tools. And it gets into this interesting debate of who should be responsible
for these decisions. I mean, a lot of people will say the military: they've been elected, they are
the officials, they understand they're held responsible for keeping us safe and protected, and
they deserve the best tools. And the OpenAIs and the Anthropics, the AI companies, they'll say,
but you don't understand how these tools work. You don't know how capable they are. You don't
understand the nuances within them. And we have spent our whole life trying to design these safely.
Therefore, you should trust us to make this decision. And I think it's step one, like event
number one, in a probably longstanding kind of argument, which is: who
actually holds the leverage over whom, and is there a willingness to work together, or is this going
to be this divisive thing where there's a band of private companies and a band of public
entities, and they're clashing because they have the same goals but are at odds with
how they get accomplished. And I think this was just an interesting moment in time to kind of reflect
on that part in particular. Well, we didn't even mention the craziest part about all of this, which
actually answers your question, which is: no one knows. The actual military operation, Epic Fury, that
was performed over the weekend was enabled by Claude, after it was blacklisted and after it was
banned. Completely ironic. Yeah, it's ironic. So, you know, you had the Pentagon creating this
entire fiasco, saying, okay, cool, we'll give you the means to transition to use another
model. They signed a new $200 million deal with OpenAI. And then they ended up using the model
which was explicitly banned by the president himself. So it goes to show that there's a lot of
nuance with this. I think Claude had been used for well over six months within the Pentagon right
now. So it's trained on all of its data. It's being used by all of the employees. It's something
that they have here. And technically, they do have another six months to transition to another
model. So it makes sense that they were still using Claude. And it's obvious that Claude is
the current and preferred choice right now. And that'll probably change over the next couple of
months. But yeah, it's a very undefined vector moving forward. I think the
US has a lot of angles towards this, meaning they want to upgrade their military offense,
but they're also cautious and curious about the rising and looming threat from China, potentially
taking over Taiwan, and a bunch of other things. So they just want to get ahead of these things.
And if they can leverage top American AI model labs to work with them, specifically with
them, that's the advantage they want. Yeah. And it seemed like it was used, in terms of
like the actual implementation for three things. It was for intelligence assessment for target
identification and for simulating battle scenarios. So the AI isn't directly guiding missiles. It's
not doing anything kinetic. It is mostly just for informational purposes. But yeah. And I think
that's where a lot of this discourse comes from. Now, Sam had a really interesting AMA where he was
kind of answering questions too, right? Yep. About kind of the public sentiment, addressing it,
doing it live in real time, answering people's questions the night of. He goes, I'd like to
answer questions about our work with the Department of War and our thinking over the past few days.
Please ask me anything. And his three takeaways were super interesting. Number one, he was
surprised at how much 50-50 debate there was between whether warfare in America or national
security should be the judgment of elected officials or unelected private companies. It seemed
like a lot of people were like, yeah, Anthropic maybe should have some more involvement here in
setting the guidelines of how we use AI within warfare. And then a bunch of other people
say, no, we elected officials specifically for this. They should be the ones doing this. The
second biggest takeaway is there's a question around whether companies like OpenAI eventually become
nationalized by the government because that technology is so important and crucial towards
things like defense and the economy. And he goes on to say, this is really revealing. He says,
I've thought about nationalization, of course, and for a long time, it seemed like it might be better
if building AI was a government project, which kind of shocked me there, because I understand
the existential crisis here. But that was super cool. And then the third thing that he states is,
people take their safety for granted. Basically saying that people don't really realize the lengths and
extents that the Department of War and defense need to go to to protect them, and this is just
misunderstood through public discussion and nuance. Yeah, the government-backed project is
super interesting, because in the past, when we have done things like this, the Manhattan Project,
companies like Lockheed Martin, who had a lot of government support, they've worked very well,
because it allows you to kind of converge resources and talent onto a single motive. And you
get the legislative protection to build as fast as possible. The issue now is there's just this
lack of efficiency and capability within those same entities that did this in the past. And the
market forces will not allow it with the amount of capital needed to build these gigantic AI data
centers. You can't extract that from taxes. You can't validate it by printing more dollars.
You actually just have to make revenue and do this in private markets. And I think that's the
slightly uncomfortable truth is that it's just too expensive and too challenging to do as
in any other way. So there has to be this divide between private and public sectors because it's
the only way that you can kind of garner resources this effectively to actually deploy them at
the scale required to build AGI in the first place. Yeah. And those were the takes, which I thought
were super interesting. Rune asked, are you worried at all about the potential for things to go really
south during a possible dispute of what's legal or not later, or be deemed a supply chain risk?
Sam responds, yes, I am. And if we have to take on that fight, we will. But it clearly
exposes us to some risk. I'm still very hopeful. This is going to get resolved. And part of why we
wanted to act fast was to help increase the chances of that. So again, re-emphasizing the point we
made earlier, he's taking the approach of take action now. And we'll figure it out later,
as long as there's certain stipulations from the government saying they'll do it within
lawful use and that there will be a human in the loop that you won't have AIs autonomously
firing weapons at random people, because the models just aren't good there. But it's relatively
uncertain. This land is very uncharted. We don't know where this is going to end up. And to
be honest, it's going to be a very significant debate. Probably for the next couple of years,
I don't think this is going to be a one off event. It is certainly the craziest 48 hours that
we've had in 2026 so far. But it is by no means at its end yet. Yeah, it's been absolutely insane.
And now you are mostly caught up on everything that happened this weekend. And it's not the end.
I mean, to your point, like he says, I think it's not a conversation that's going to end here.
I mean, just in the last two months, we've gone to Venezuela, and now Iran, and
there's clearly more intent to apply this to the real world. And as these models get more
capable, as they're able to actually do more things, these debates are just going to keep
heating up. But this one was crazy. I mean, I haven't been glued to my phone like this in a long time.
And the plot twist, like this is better than any sort of drama TV show, right? We watched
a deal fall apart. The same person who was backing that deal swooped in and stole it. And then,
within hours, the blacklisted AI was used to actually attack another country, even though a new
deal had been signed, because they still have six months left on the contract. And now Dario and the
Anthropic team are upset, and the public kind of supported them, so it went to number one on the
App Store. And it's just like, I mean, so much. Can we appreciate how quickly all of this happened
as well? Like, oh man, yeah, I've got to shout out X for this, because Jesus, like, the
information was flowing. The information was flowing. And it came in in real time. Like, I felt
the hours. Like I woke up, I think it was Saturday morning. I went to bed early, like maybe a normal
or lame person. And I woke up and I saw, I think a tweet from maybe you, Josh, that was like,
giving me the breakdown of everything that was going on. And I was like, how did I miss this?
This is like an hour after I went to bed. News was breaking every hour. It was amazing. It was
absolutely insane. And it just goes to show that the speed at which AI is accelerating,
not just in chatbots, not just in video creation, but in major important things like national defense
and security, should not be understated, and should be a topic of focus for probably a lot of other
sectors going forward. I don't know if we're at the point where we want to get into homework
for the listeners here, but I really want to hear from you. What are your thoughts on this
entire debate? Do you think the Pentagon was in the right? Do you think Dario was in the right?
Do you think OpenAI and Sam actually struck the right resolution? Or do you think it's all
rubbish and that we need to completely dismantle everything and rebuild from the ground up?
Let us know your thoughts in the comments, or even DM us. Like, I really want to hear
your feedback. Yeah. And if you want to follow the conversation, we've been monitoring the situation,
we've been publishing on the situation. Follow both of us on Twitter or on
Instagram, links in the description below. We've been on it. I think between us, we've gotten like
20, 30, 40 million impressions this weekend. It's been crazy. So that is always where you can see
the news first, before we get on camera, but we will try to keep you updated. If you have watched
this, congratulations, you're now up to date, for now. We'll see where things go throughout the
rest of this week. But we have a lot more planned. There's a lot of exciting topics to cover.
And we'll be here with you to cover it all. So thank you, as always, for watching. I very
much appreciate it. Thank you for sharing with your friends, which goes a long way, and for subscribing
to our Substack, which has been doing very, very well. Like, there's like 60,000 people that read
every single one. So if you want to be in the know, click the links down in the description,
share it with your friends. And as always, thank you so much for watching. We will see you guys in the
next one.
Limitless Podcast



