
When AI startup Anthropic refused to let the Pentagon use its Claude model for fully autonomous weapons and mass domestic surveillance, the Department of Defense retaliated by designating the American company an unprecedented "supply chain risk". This standoff highlights a growing crisis as consumer AI systems are rapidly integrated into kinetic military operations and lethal kill chains, accelerating targeting in conflicts like the US-Israeli war on Iran. As the government wields economic warfare and Cold War-era statutes to dismantle corporate ethical guardrails, the tech industry faces a defining battle over who ultimately controls the moral architecture of the world's most powerful technologies.
Welcome to another deep dive.
It is really great to have you with us today.
Yeah, thanks for tuning in.
If you're listening right now, you are probably someone
who is constantly juggling a million things,
but you refuse to just skim the surface
of what's happening in the world.
You want to understand the massive,
the really structural shifts happening right now.
And you want the nuance behind these terrifying
or I guess triumphant headlines
without having to read 1,000 pages of policy documents.
Which trust us is a lot of reading.
It is a staggering amount of reading.
So you value your time, you are deeply curious
and you're in exactly the right place.
Today we are bringing you a, well,
it's a "did you hear about this?" kind of conversation.
Absolutely.
And fair warning, it is going to fundamentally change
how you look at that helpful, friendly little AI assistant
sitting on your phone or your desktop right now.
Yeah, the one you probably just used to write an email.
Exactly.
Because we were talking about the sudden, dramatic
and completely unprecedented collision
of consumer artificial intelligence
and kinetic military operations.
It's a profound structural shift
in how power operates today.
We are witnessing this real-time transition
from AI being a tool used to draft corporate memos
or summarize meeting notes to AI being actively integrated
into the kill chain of modern warfare.
And the speed of this transition has caught nearly everyone
completely off guard, from the lawmakers in Washington
to the tech industry leaders in Silicon Valley.
To unpack this for you, we have gathered
a massive stack of source material.
We're looking at recent, deeply researched investigative reports
from the Washington Post, The Intercept, and The Guardian.
Some really heavy hitting journalism there.
Very heavy.
We also have the official Department of Defense
Strategy memos from early 2026, public statements
and corporate manifestos from the CEOs
of the major AI companies, and incredibly detailed legal
analysis from organizations like the Stockholm International
Peace Research Institute.
Which we'll probably just refer to as SIPRI today.
SIPRI, as well as the legal analysis site Lawfare.
Now, before we jump into the timeline of events,
I need to offer a very clear, very important disclaimer.
Yeah, we need to set the ground rules for this one.
These sources contain highly politically charged events.
We are going to be discussing military operations
conducted by the United States and Israel, strikes against
Iran and specific administrative directives coming
from the Trump administration.
So we want to be explicitly clear to you, the listener.
Yes.
We are taking absolutely no political sides here.
We are not endorsing any specific military action,
nor are we endorsing any specific corporate or political
ideology.
Not at all.
Our mission today is strictly and impartially
to report the facts, the viewpoints, and the arguments
contained in this source material.
We're just here to help you understand
the technological landscape and the unprecedented standoff
between Silicon Valley and the Pentagon
over who actually controls the ethics
of the most powerful technology on Earth.
It is a complex, tangled web of corporate policy,
national security imperatives, and international humanitarian
law.
It really is.
Our goal today is just to untangle that web
neutrally and thoroughly so you can see the architecture
of what is actually happening behind the scenes.
And before we get into that timeline,
a quick thank you to our sponsor, www.breach.company,
for supporting our ability to bring you these deep dives.
OK, let's unpack this.
Yes, let's, we have to start with the inciting incident.
Yes.
This is the event that dragged all of these highly theoretical,
you know, academic debates about AI ethics out of the conference
rooms and into a stark, devastating reality.
Right.
We're talking about early March 2026,
and a United States military operation
called Operation Epic Fury.
The scale described in the reporting
is difficult to wrap your head around, honestly.
It is.
According to the sources, the US military
struck over 1,000 individual targets in Iran
within the first 24 hours of the campaign.
Just pause on that number for a second.
1,000 targets in a single day.
Right.
If you think about a traditional human command structure,
finding a target, verifying it, getting a lawyer
to look at the rules of engagement,
getting a commander to authorize the strike,
and then actually putting a munition on that target.
It takes time.
That tempo seems entirely impossible for humans.
How does a military apparatus physically move that fast?
And that tempo is the crucial entry point
into understanding this new era, right?
Exactly.
The human mind and human bureaucratic structures
simply cannot process intelligence,
verify the legitimacy of targets,
and authorize strikes at that volume and velocity.
No way.
The traditional military kill chain,
which is typically summarized as find,
fix, track, target, engage, and assess,
is a very deliberate, human-intensive process.
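For listeners who like to see the structure laid out, here is a minimal, purely illustrative sketch of those six stages as a pipeline. The stage names are standard doctrine, often abbreviated F2T2EA; the code around them is just a toy:

```python
from enum import Enum

class KillChainStage(Enum):
    """The six doctrinal stages of the traditional kill chain (F2T2EA)."""
    FIND = 1     # detect a potential target
    FIX = 2      # pin down its precise location
    TRACK = 3    # maintain custody of the target over time
    TARGET = 4   # legal review, rules-of-engagement check, authorization
    ENGAGE = 5   # deliver the munition
    ASSESS = 6   # battle damage assessment

def prosecute_target(target_id: str) -> None:
    # In the traditional process, every stage is a human checkpoint.
    for stage in KillChainStage:
        print(f"{target_id}: {stage.name} -- human review and sign-off")
```

What the episode describes is that compressed tempo collapsing each of those human checkpoints into machine time.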
But the military was able to achieve this new compressed tempo
because of a specific technological architecture
called the Maven Smart System,
which was developed by the defense contractor Palantir.
OK, but here's the crazy part.
The actual brain powering this system's reasoning,
the engine parsing the sheer volume of data,
was Anthropic's Claude.
Yeah.
Just to be absolutely clear for the listener,
you mean the same consumer large language model
that people use to help them write Python code,
or draft a marketing email, or I don't know,
plan a weekly dinner menu?
That is the exact model.
It is a large language model, or LLM.
According to the investigative reporting,
Claude was integrated to process vast streams
of classified data.
What kind of data are we talking about?
We are talking about real-time satellite imagery,
massive amounts of communications intercepts,
logistics movements, radar signatures.
All being fed into Claude.
All of it.
The report suggests that Claude proposed hundreds of targets,
provided precise geographical coordinates
for those targets, and even evaluated
the battle damage of the strikes in real time.
Wow.
It took a process that would normally
require a room full of highly trained human intelligence
analysts, weeks of painstaking battle planning,
and compressed it into a matter of minutes.
That compression of time is staggering.
But it immediately brings us to the human cost,
which is deeply tragic, and forms the core of the ethical
debate we are going to explore.
Yeah, it does.
The sources detail a specific strike
during this early March campaign on the Shajera
Tayebeh Girls' School, located in the southern Iranian city
of Minab.
The reports indicate that 155 students and staff members
were killed.
It's horrific.
And it was reportedly what is known in military terms
as a double tap strike.
For those who aren't familiar, a double tap
means a second munition hits the exact same site
shortly after the first.
Usually right at the moment when first responders, neighbors,
and parents have rushed to the scene
to pull survivors from the rubble.
Exactly.
Now, the Pentagon has firmly refused
to confirm whether Claude specifically nominated
that school as a target.
The Pentagon's silence on the specific chain
of events regarding that school leaves
the direct accountability unconfirmed.
However, the broader integration of AI
into these strike systems in the tempo we just discussed
highlights a well-documented and incredibly dangerous
psychological phenomenon called automation bias.
I think most people have experienced a low stakes version
of automation bias.
Oh, absolutely.
It's like when your GPS tells you to turn down a dirt road.
You're looking out the windshield,
you see it's a private driveway,
or it leads directly into a lake.
And your brain is screaming that it's wrong.
What, you do it anyway?
You do it anyway, because the computer knows best
and the computer has access to satellites.
So who are you to argue with the GPS?
That is the perfect everyday analogy.
Now, scale that psychological tendency up
to a lethal environment.
When humans are partnered with highly advanced machines,
especially machines that operate at a speed far beyond
human cognitive capability, and present their findings
with absolute machine-like confidence,
we have a profound tendency to defer
to the machine's perceived authority.
We just abandon our own critical thinking.
We do.
And we have stark historical parallels documented
in the source material that show how this plays out in warfare.
Look at the investigative reports concerning
Israel's use of AI targeting systems,
which were known as Lavender and Gospel
during the war in Gaza.
The numbers in those reports were just staggering.
They really were.
According to the coverage, the lavender system generated
37,000 potential targets.
And importantly, it had a known, accepted false-positive
rate of around 10%.
Ten percent.
Yet the human operators tasked with overseeing the system
reportedly spent as little as 20 seconds
verifying these targets before authorizing a strike.
20 seconds.
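It's worth doing the back-of-the-envelope math on those two reported figures, because together they imply thousands of wrongly flagged people and hundreds of hours of nominal review time. A quick sketch, using only the numbers from the reporting:

```python
targets = 37_000            # potential targets reportedly generated by the system
false_positive_rate = 0.10  # the reported, accepted error rate
review_seconds = 20         # reported human verification time per target

misidentified = targets * false_positive_rate    # 3,700 people wrongly flagged
review_hours = targets * review_seconds / 3600   # ~206 hours of total "oversight"

print(f"Expected false positives: {misidentified:,.0f}")
print(f"Total human review time: {review_hours:.0f} hours")
```

Roughly 3,700 expected misidentifications, with each one getting a third of a minute of human attention.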
I want the listener to really picture that scenario.
Yeah, put yourself in that room.
You are a targeting officer.
You are sitting in a classified facility
staring at a screen.
A highly advanced computer system, one that
processes more data in a single second
than you could read in your entire lifetime,
tells you that a specific building is a hostile target.
Right.
It gives you a coordinate.
You have 20 seconds to decide whether to authorize
the use of lethal force.
In 20 seconds, do you realistically
have the cognitive capacity, or even the professional
confidence, to dig into the raw intelligence, cross-reference
the data, and disagree with the machine?
The reality is, you don't.
It creates a structural illusion of control.
And a loop.
Exactly.
The human is technically in the loop.
Their presence satisfies a bureaucratic and legal requirement
that a human makes the final decision on lethal force.
But functionally, the human is just a rubber stamp.
They're just pushing the button
the machine tells them to push.
The machine is driving the operational tempo,
and the human is just struggling to keep up
with the conveyor belt of targets.
This dynamic, this abdication of human judgment
to an algorithm is the exact scenario
that terrified the leadership at Anthropic.
And that leads us to the unprecedented core conflict
we are seeing between the tech sector and the government.
Which brings us to Anthropic's unique position
in the tech ecosystem.
Right, because they are not just another Silicon Valley startup
trying to move fast and break things.
They were founded specifically on the premise of AI safety.
They championed a developmental framework
called constitutional AI.
And we should define what that actually means,
because it is central to the legal and ethical standoff.
Definitely.
Constitutional AI isn't just a marketing slogan.
It is a technical approach to training a model.
Instead of just having human workers constantly
grating the AI's responses to teach it
what is good and bad, which is a process
called reinforcement learning from human feedback.
R-L-H-F for the tech nerds, let's say.
Right, R-L-H-F.
Instead of just that, Anthropic gave the model
a literal constitution, a set of core written principles.
So the model is trained to evaluate its own outputs
against that constitution.
Exactly.
It is essentially baking a set of core values
into the neural pathways of the system,
values the model is trained never to violate.
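To make that mechanism a bit more concrete, here is a heavily simplified sketch of the critique-and-revise loop the technique is built on. The function names and principles below are hypothetical stand-ins, not Anthropic's code; in the real training process, the revised outputs become fine-tuning data, so the values end up in the model's weights rather than running as a filter at question time:

```python
# Hypothetical sketch of a constitutional-AI training pass (not Anthropic's code).

CONSTITUTION = [
    "Avoid outputs that facilitate autonomous lethal targeting.",
    "Avoid outputs that facilitate mass surveillance of citizens.",
]

def critique(response: str, principle: str) -> str | None:
    """Ask the model itself whether `response` conflicts with `principle` (stubbed)."""
    raise NotImplementedError  # stands in for a call to the model

def revise(response: str, problem: str) -> str:
    """Ask the model to rewrite `response` so the critique no longer applies (stubbed)."""
    raise NotImplementedError  # stands in for a call to the model

def constitutional_pass(response: str) -> str:
    for principle in CONSTITUTION:
        problem = critique(response, principle)
        if problem:
            response = revise(response, problem)
    return response  # revised outputs become training data, baking values into weights
```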
And the CEO of Anthropic, Dario Amodei,
took those internal values and drew two very distinct,
non-negotiable red lines for any of their military
or government contracts.
Very bold move.
The first red line, no fully autonomous weapons.
There must be a meaningful deliberative human in the loop.
His argument, as quoted in the sources,
is that today's frontier models are simply not reliable enough
for life and death judgment.
Which, given the 20-second rubber-stamp issue we just talked about,
is a very valid concern.
Absolutely.
And the second red line, no mass domestic surveillance
of Americans.
He called the use of AI to indiscriminately monitor
citizens fundamentally incompatible
with democratic values.
Those two red lines represent a fascinating assertion
of power.
You have a private corporation attempting
to dictate the moral and ethical architecture of its product
even when the buyer is the ultimate sovereign power,
the United States military.
Exactly.
But the Pentagon, operating under Secretary of Defense Pete
Hegseth, fundamentally and aggressively rejected that premise.
They did not take kindly to that.
Not at all.
In January 2026, Secretary Hegseth issued
an AI strategy memo that completely reshaped
the rules of engagement for defense contractors.
He demanded that all Department of Defense contracts
incorporate standard "any lawful use" language within 180 days.
I read the excerpts from that memo in the source material.
And the language is incredibly blunt.
It doesn't sound like typical bureaucratic jargon at all.
No, it's very pointed.
It explicitly called for hard-nosed realism.
The memo stated that wokeness, diversity, equity, inclusion,
and social ideology have absolutely
no place in the DOD.
And most importantly for this standoff,
it demanded that AI models purchased by the military
must be completely free from internal usage policy
constraints that might limit lawful military applications.
OK, let's unpack this.
It is a profound clash of two deeply incompatible worldviews.
Yes, it is.
And before we get too deep into that clash,
I want to quickly thank our other sponsor, www.myprivacy.blog
for keeping the show running.
So looking at these worldviews, the Pentagon's position
is rooted in traditional state sovereignty.
Their argument is that national defense
is the core constitutional responsibility of the state.
In their view, they cannot have a private vendor
in California acting as a shadow ethics committee,
vetoing operational capabilities
during a geopolitical crisis.
If the military's legal counsel deems
a specific action lawful under the international rules
of engagement, the technology they purchased with public funds
must be capable of executing that action.
Without the software second-guessing the general.
Let's play devil's advocate for the Pentagon for a moment,
because their argument has a certain brutal logic to it.
It does.
If the military buys a fighter jet from Lockheed Martin,
Lockhe doesn't get to put software in the jet
that says, we don't agree with this particular war,
so the missile systems are locked.
The military buys the tool,
and the military takes responsibility for how it is used.
Why should a software company be treated any differently?
That is exactly the argument the Department of Defense is making.
They view Claude as a munition, a logistical tool.
But Anthropic's counterargument is that an LLM
is not a static tool, like a rifle or a jet engine.
It's an engine of reasoning.
Exactly.
And because it hallucinates, and because it is unpredictable,
the manufacturer retains a moral and structural responsibility
for its outputs.
So let's look at how this standoff escalated
from a contract dispute into a full-blown constitutional crisis.
Anthropic refused to lift its safeguards.
They stood firmly by their red lines.
And the retaliation from the government
was incredibly swift and severe.
The Trump administration ordered all federal civilian agencies
to immediately cease using Anthropic's technology.
But the real hammer blow came from the Department of Defense,
which designated Anthropic as a supply chain risk
to national security.
That specific designation, "supply chain risk,"
is not a mere slap on the wrist or a temporary suspension, is it?
Not at all.
In the context of the modern defense industry,
it is essentially the commercial equivalent
of the death penalty for an AI company.
Wow.
Historically, the United States government
has reserved this designation for foreign adversaries
and state-backed entities.
Like who?
The most prominent example is China's Huawei,
where the underlying concern was active espionage,
backdoor surveillance, or the potential
for a foreign government to sabotage American telecommunications.
And they applied that to Anthropic?
Yes.
Applying that exact same national security designation
to an American company, one based in San Francisco,
over an ethical and policy dispute, is entirely unprecedented.
I think a lot of people might hear that and think, OK, so
Anthropic loses out on some military contracts.
They can still sell their chatbot to everyday consumers
and regular businesses, right?
How does a government label actually
threaten the existence of an AI company?
To understand why it's an existential threat,
we have to look at the physical mechanics of how
cloud infrastructure actually works.
Frontier AI models, like Claude or OpenAI's GPT models,
require absolutely massive mind-boggling amounts
of computational power to function.
We are talking about vast data centers filled
with tens of thousands of specialized,
incredibly expensive microchips, usually made by NVIDIA.
Right.
And Anthropic does not own all of those data centers.
No single AI startup could afford
to build that physical infrastructure from scratch.
It's just too expensive.
So instead, they rely heavily on third-party cloud
providers to host and run their models.
Primarily, they rely on tech giants,
like Amazon Web Services, AWS, and Google Cloud.
And AWS and Google Cloud happen to hold billions and billions
of dollars in their own defense contracts
with the government.
Precisely.
The web of defense contracting is deeply interconnected.
Consider the Joint Warfighting Cloud Capability contract,
the JWCC.
Right, that one is huge.
That single contract is worth up to $9 billion.
Split among major cloud providers.
Now, if the government legally decrees
that no defense contractor can do business
with an entity labeled a supply chain risk,
companies like Amazon and Google
are suddenly put in an impossible position.
To protect their incredibly lucrative
enterprise-wide defense contracts,
they would be legally compelled to sever their ties
with the blacklisted company.
They would have to kick Anthropic off their servers.
And without that rented cloud infrastructure,
Anthropic's models simply go offline.
They cease to exist as a functional service
for anyone anywhere in the world.
It's a structural chokehold.
The government is using the leverage of cloud infrastructure
to starve Anthropic out.
Yeah, but the government didn't just stop at economic pressure.
Secretary Hegseth also threatened to invoke
the Defense Production Act or the DPA against Anthropic.
The invocation of the DPA in this context
is legally fascinating.
The Defense Production Act is a law dating
back to the Korean War era.
It's an old law.
Very old.
It was designed to give the president
the emergency authority to direct private industry
to prioritize the production of goods for national defense.
We saw it used recently during the COVID-19 pandemic
to force factories to build ventilators.
Or historically to prioritize steel shipments
for building tanks.
It's fundamentally about demanding priority access
to existing physical goods.
But attempting to apply a 1950s industrial law
to the ethics of artificial intelligence
is a massive untested stretch of executive power.
I was reading the legal analysis from Lawfare
on this specific point.
And they raised a really profound question.
Are they just asking Anthropic to hand over the software
on a hard drive?
Or are they demanding something much deeper and more invasive?
That is the pivotal legal question
that could define the future of the tech industry.
Exactly.
If the government is simply saying,
we want priority access to use Claude,
and we are going to ignore your terms of service
regarding how we use it,
that is one legal battle.
But Anthropic's safety guardrails aren't just
a separate piece of software that can be uninstalled.
No, because of that constitutional AI training
we discussed earlier,
the safety mechanisms are mathematically
baked into the neural weights of the model itself.
To give the Pentagon a version of Claude
that is fully capable of directing autonomous lethal targeting
or conducting unrestricted mass domestic surveillance,
Anthropic might actually have to retrain the model
from the ground up.
They would have to actively strip out
the ethical architecture they spent years engineering,
which raises massive First Amendment questions.
Can the United States government legally
compel a private group of citizens,
a private company to write code,
to express values and to build a tool
that the company fundamentally morally rejects?
And there is a glaring, almost absurd irony here
that the sources highlight.
The government is simultaneously labeling Anthropic
a security risk that is so dangerous,
it must be purged from the entire defense supply chain
while at the exact same time threatening to use
emergency wartime powers to seize their product
because it is deemed too essential
to national defense to lose.
How can a product be both a critical threat
and an essential lifeline at the same time?
It's a profound contradiction.
It reveals the true nature of the standoff.
It forces us to ask a difficult macro level question
about corporate sovereignty and state power.
Does the state have the absolute right
to commandeer a private company's moral architecture
in the name of national defense?
If a group of engineers build a tool
with built-in hard-coded limitations,
specifically designed to prevent harm,
does the government have the legal and moral right
to force them to break those limitations?
That is something I want everyone listening
to really think about.
If a company builds a safety mechanism into a product,
whether it's an AI model, an encryption protocol,
or a physical device,
does the government have the right
to demand the unsafe version?
Well, as this standoff between Anthropic and the Pentagon
reached a boiling point,
the tech industry did what it always does.
It shuffled the deck and looked for an advantage.
Exactly.
Within hours of Anthropic being blacklisted
and facing this existential threat to their survival,
a major rival swooped in.
Oh, yeah.
Sam Altman, the CEO of OpenAI,
announced a rushed, highly publicized deal
to put OpenAI's models onto the Department
of Defense's classified networks.
The timing of OpenAI's announcement was impeccable
and the messaging was highly calculated.
Altman publicly claimed that the DOD
had actually accepted OpenAI's version of the red lines.
He stated that OpenAI's technology
would not be used for mass domestic surveillance
and it would not be used to independently
direct autonomous weapons,
which sparked massive confusion
across the entire industry.
I mean, if you are following the logic,
why would the Pentagon completely crush anthropic,
threaten them with the DPA
and label them a national security risk
for demanding these ethical red lines,
only to turn around and agree to the exact same red lines
with OpenAI a few hours later?
It strongly suggests that the dispute with Anthropic
was never really about the specific operational policy.
What was it about then?
It was about who holds the power in the relationship.
It was an ideological battle.
The Pentagon needed to establish a precedent:
the military sets the rules,
not the Silicon Valley vendors.
And Anthropic had tried to dictate terms and hold the line.
OpenAI approached the situation far more strategically.
They engaged in public posturing about maintaining safety
while ensuring they secured the lucrative contract
and embedded themselves even deeper
into the military's permanent infrastructure.
They gave the Pentagon the compliance it demanded
while trying to manage the public relations fallout.
But according to the sources,
that move seriously backfired on OpenAI
in the court of public opinion.
The reports indicate that over 1.5 million users
dropped chat GPT in protest almost immediately
after the DOD deal was announced.
People were furious.
They really were.
But beyond the user outrage,
the broader institutional tech lobby stepped in.
The Information Technology Industry Council,
known as the ITI,
which represents massive giants
like Google, Microsoft, Meta, and Apple,
started pushing back against the administration.
And it is vital to notice how they pushed back.
The ITI didn't explicitly defend Anthropic's ethical red lines.
They didn't take a stance on whether AI should be used
in warfare. Instead, they defended the process.
They framed the Pentagon's actions
as a dangerous arbitrary precedent.
Their argument was that the government
should not be able to arbitrarily decide which companies survive
and which are destroyed based on vague compliance standards.
Especially without any clear, transparent appeal process.
The tech giants recognize that if the government
can use the supply chain risk label
to crush Anthropic today
over a policy disagreement,
they can use that exact same mechanism
to crush any cloud provider, hardware manufacturer,
or software developer tomorrow.
To really understand how wild this cultural shift is,
we have to look back at the recent history of Silicon Valley.
Let's rewind the clock just a few years to 2018.
And an initiative called Project Maven.
This was the Pentagon's initial push
to use early forms of artificial intelligence
to analyze drone footage.
Google won the contract.
And the internal backlash from their employees was explosive.
4,000 Google employees signed a petition
refusing to build warfare technology.
High-level engineers resigned in protest.
The old corporate motto, don't be evil,
was plastered everywhere.
Ultimately, the internal pressure was so intense
that Google walked away from the contract entirely
and published a set of restrictive ethical AI principles.
That moment in 2018 was widely seen
as a triumph of Silicon Valley idealism.
The workers organized, they said no
to the military-industrial complex,
and one of the largest corporations on Earth backed down.
But the military didn't stop building Project Maven.
No, the Pentagon simply found new partners.
Companies like Palantir and Anduril stepped in to fill the void.
The military helped cultivate an alternative ecosystem
of defense tech companies that were culturally
aligned with the Pentagon and eager to do the work that Google refused.
And contrast that 2018 Google walk out
with what happened just a few years later.
Microsoft signed a massive $22 billion contract
to provide IVAS combat headsets to the US Army.
These are augmented reality headsets designed
to overlay tactical data for soldiers in the field.
Microsoft employees protested, arguing
that the technology turned real-world warfare into a video game
and distanced soldiers from the grim reality of killing.
But Microsoft leadership didn't back down.
Not at all.
They completely ignored the employee protests
and pushed forward with the contract.
The takeaway from this historical comparison
is a massive paradigm shift in the tech industry.
The industry has fundamentally moved away
from the utopian, anti-war idealism
that once characterized Silicon Valley.
Developing frontier AI is incredibly, unsustainably expensive.
These companies are burning billions of dollars every quarter
just on compute power and electricity.
And the Department of Defense is one of the only customers on the planet
with pockets deep enough to fund that scale of ongoing development.
The tech ecosystem is now heavily,
inextricably integrated with the defense ecosystem.
The cultural wall between San Francisco and the Pentagon has completely fallen.
And that deep integration brings us to a terrifying technical reality.
Let's talk about the actual AI models themselves,
the code beneath the controversy.
This is crucial.
I think a lot of people picture AI, even advanced AI,
as essentially a super advanced calculator.
You put in a math problem or a query,
and it searches a database to give you a guaranteed, factually correct answer.
But large language models, whether it is Claude, ChatGPT,
or Gemini, do not work like that at all.
No, they don't.
They are probabilistic prediction engines.
Right. They do not know facts in the way a database does.
They predict the next most likely word or token
in a sequence based on the massive amounts of training data
they ingested during their creation.
And because they are fundamentally probabilistic, they hallucinate.
They confidently invent facts, synthesize false connections,
and present incorrect information with the exact same tone
of supreme confidence as they present the truth.
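A toy illustration of what "probabilistic prediction engine" means in practice; none of this is any vendor's actual code, but it shows how a model scores candidate continuations and why a false one can come out sounding exactly as confident as a true one:

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Turn raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate next tokens after "The treaty was signed..."
candidates = ["in 1969", "in 1972", "in 1958"]
logits = [2.1, 1.3, 0.4]  # made-up numbers standing in for a real model's output

for token, p in zip(candidates, softmax(logits)):
    print(f"{token}: {p:.0%}")
# The model emits whichever continuation is statistically likely.
# Truth is not part of the objective, and a wrong date is delivered
# with the same fluent tone as a right one.
```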
So how does a probabilistic engine that occasionally
hallucinates a recipe or a historical date operate in a war zone?
As one researcher quoted in the sources noted,
LLMs are fundamentally brittle.
They are prone to catastrophic error
when they are taken outside the parameters
of their training data.
And you cannot fully capture the chaotic, dynamic,
unpredictable nature of a live war zone
in a static training data set.
Exactly.
The Federation of American Scientists
or the FAS, highlighted the specific dangers of this
by introducing a concept they call the lethal trifecta.
The lethal trifecta.
Let's break that down.
What are the three factors?
The FAS argues that putting consumer AI in a military setting
combines three uniquely dangerous factors
that scale the risk exponentially.
Factor one, the AI is granted access
to highly sensitive classified data.
So it is ingesting real-time intelligence
that cannot be publicly verified.
Right.
Factor two, it is exposed to untrusted
or explicitly adversarial content.
In a war zone, the enemy is actively
using disinformation, spoofing radar,
and feeding false data specifically designed
to trick the intelligence gathering apparatus
and by extension, to trick the AI.
And Factor three.
Factor three.
The AI has the ability to trigger external kinetic actions
in the real world.
It isn't just generating a report.
It is nominating a physical target for a missile strike.
When you combine an AI's inherent tendency
for probabilistic hallucinations with adversarial data
and the ability to pull a trigger,
the potential for catastrophic failure is enormous.
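The FAS framing reduces to a simple conjunction, which a short sketch captures; the field names here are ours, not the FAS's:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    reads_sensitive_data: bool       # factor 1: classified, unverifiable inputs
    faces_adversarial_content: bool  # factor 2: enemy disinformation and spoofing
    triggers_kinetic_action: bool    # factor 3: outputs feed real-world strikes

def lethal_trifecta(d: Deployment) -> bool:
    """Per the FAS argument, all three factors together scale risk exponentially."""
    return (d.reads_sensitive_data
            and d.faces_adversarial_content
            and d.triggers_kinetic_action)

maven_style_use = Deployment(True, True, True)
print(lethal_trifecta(maven_style_use))  # True: the dangerous combination
```

Any one of these factors alone is manageable; the argument is that combining all three is what makes the failure modes catastrophic.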
Here's where it gets really interesting.
And honestly, pretty scary from an accountability perspective.
Law professor Ashley Deeks coined a term
for this specific scenario: the double black box.
The double black box perfectly describes
the accountability void we're stepping into.
Let's look at the first black box.
When Anthropic provides Claude to the military via
a platform like Palantir, the model is physically moved
onto classified military servers.
In DOD terminology, these are impact level five
or impact level six networks.
Right, highly secure air-gapped environments.
Once the model is inside that network,
Anthropic loses all visibility.
They cannot monitor the prompts being fed into the system
by military operators, and they cannot see the outputs
who targets being generated.
That is the first black box, corporate blindness.
The company that built the tool has absolutely no idea
if their safety guidelines are actually being followed
in practice.
And the second black box is on the military side
of the equation.
Exactly.
The military operators, the targeting officers,
and the commanders making the final life and death decisions,
they do not fully understand the proprietary neural weights,
the specific algorithmic biases,
or the exact training mechanisms
of the commercial model they are using.
They are trusting a proprietary corporate algorithm
to parse intelligence and pick targets
without knowing exactly how the machine arrived
at that specific conclusion.
That is military blindness.
So if I'm understanding this correctly,
you have a corporation that doesn't know
how its weapon is being used, partnering with a military
that doesn't fully understand how its weapon thinks.
It is a recipe for disaster,
especially when it comes to international humanitarian law
and the prevention of war crimes.
We are going to dive deep into the international law aspects
of this, but before we do,
let's take a quick moment to thank both of our sponsors,
www.breach.com and www.myprivacy.blog
for keeping the show running.
If we connect this double black box concept
to the bigger picture,
we have to look at the findings
from the Stockholm International Peace Research Institute,
or SIPRI.
They released a profound, deeply researched report
analyzing how inherent bias in military AI
directly threatens compliance
with international humanitarian law or IHL.
Algorithmic bias is something we hear
about frequently in civilian life.
We've seen stories about an AI resume screener
favoring male applicants over female applicants
or facial recognition software failing disproportionately
on people of color because of biased training data.
But how does that kind of civilian algorithmic bias
translate to a kinetic war zone?
In a war zone, demographic bias regarding gender, ethnicity,
or culture in training data
doesn't just lead to an unfair job rejection
or wrongful arrest.
It leads to lethal targeting errors and war crimes.
Let's look at a specific hypothetical example
that SIPRI provided to illustrate this danger.
Imagine an AI targeting system that has been trained primarily
on historical combat data from past counterinsurgency conflicts.
In that historical data, the enemy combatants
were predominantly young men who moved in groups,
carried rifles, and communicated using encrypted apps
on prepaid cell phones.
OK, so the AI learns that statistical pattern.
Young men, plus guns, plus prepaid phones,
equals hostile combatant.
Precisely.
The machine learns the pattern and applies it ruthlessly.
But what happens when that exact same AI system
is deployed in a different region
with a completely different cultural context?
It wouldn't know the difference.
The AI might scan a rural village via drone footage
and identify a group of young men carrying rifles
and prepaid phones.
The system flags them as an immediate high confidence threat.
But because the AI has no understanding of human culture,
it completely misses the context that these men
might be civilians participating in a traditional festival
or a local hunting party, where carrying a weapon
is customary and legal.
The AI lacks the human context, but it
presents the target to the human operator
with machine-level mathematical confidence.
This is a 99% match for a combatant.
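You can see that failure mode in a deliberately naive sketch. The scoring rules below are entirely hypothetical, standing in for whatever statistical pattern a real system distills from one conflict's data; the point is that the same observable features produce the same confident score in a context where the ground truth is completely different:

```python
def naive_threat_score(person: dict) -> float:
    """Toy pattern-matcher 'learned' from one conflict's historical data."""
    score = 0.0
    if person.get("age", 99) < 35 and person.get("sex") == "M":
        score += 0.40  # young male: correlated with 'combatant' in training data
    if person.get("carrying_rifle"):
        score += 0.40
    if person.get("prepaid_phone"):
        score += 0.19
    return score  # no feature for festivals, hunting parties, or local custom

# A civilian hunter in a different region, same observable features:
hunter = {"age": 24, "sex": "M", "carrying_rifle": True, "prepaid_phone": True}
print(f"{naive_threat_score(hunter):.0%} match for 'combatant'")  # 99%, and wrong
```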
And that directly violates the core tenets
of international humanitarian law, doesn't it?
It violates several foundational principles of IHL.
First is the principle of distinction.
International law requires that warring parties must constantly
and effectively distinguish between combatants and civilians.
If your AI system is inherently biased
by its training data to view all military-aged males
of a certain ethnicity as hostile threats,
it fundamentally fails the distinction test.
Second is the principle of proportionality.
A commander cannot legally authorize a strike
if the expected civilian harm outweighs the anticipated
military advantage.
If the AI hallucinates the strategic value of a target
or underestimates the civilian presence in a building,
your proportionality calculation is legally and morally void.
And the third principle, which seems the most relevant
to the 20-second verification window we talked about earlier,
is precautions in attack.
Yes.
Commanders have a strict legal mandate
to do everything feasible to verify
that a target is actually a military objective before striking.
If you are relying on what Palantir's own chief revenue
officer called "jagged intelligence,"
meaning an LLM that lacks a basic, grounded understanding
of reality.
Exactly.
To do your verifying, you are failing
your duty of care as a commander.
You are outsourcing your legal responsibility
to a black box you don't understand.
Which brings us grimly back to the Shajera Tayebeh Girls' School
tragedy in Iran.
If an AI system generated that target based
on a flawed statistical pattern and a human operator
rubber stamped the strike in 20 seconds
because they trusted the machine's authority,
who is legally held accountable for those 155 lives?
Is it the AI developer at Anthropic who trained the model?
Is it the tech CEO who signed the DOD contract?
Is it the 24-year-old targeting officer
who clicked approve under immense pressure?
Or is it the general who mandated the use
of the maven system in the first place?
The terrifying reality is that international law
is not currently equipped to assign accountability
in this new algorithmic kill chain.
The diffusion of responsibility
allows everyone to point the finger at someone else.
The soldier blames the machine,
the general blames the contractor,
and the contractor claims they didn't know
how the machine was being used.
That lack of accountability extends far beyond the battlefield
and reaches right into our own homes.
We need to shift gears and look at Anthropic's second red line,
which was mass domestic surveillance.
The Department of Defense's insistence
that AI models be available for "any lawful use"
isn't just about dropping bombs in foreign conflicts.
It opens a massive terrifying door
regarding the privacy of everyday citizens.
The intelligence community and the military
have long desired the ability to process bulk commercial data.
We're talking about geolocation records
bought from commercial data brokers,
massive troves of web browsing histories,
financial transactions, and communication metadata.
For human analysts to manually sift through that scale
of data to find patterns is practically impossible.
But an advanced, large language model
can ingest millions of disparate records
and instantly map out the networks, associations,
political leanings, and daily routines
of millions of everyday citizens.
Dario Amodei was very clear that he viewed this
as incompatible with democratic values.
If the military and the intelligence apparatus
can force an AI company to drop its guardrails
under the banner of national security,
what actually stops them from using a model like Claude
to continuously monitor the digital footprint
of every American without a specific warrant?
They could easily argue that analyzing commercially purchased
data is a lawful intelligence gathering activity.
And this synthesizes the macro level threat perfectly.
The standoff between Anthropic and the Pentagon
is not just an arcane contract dispute
about software licensing.
It is a profound crisis of democratic erosion.
Right now, there is absolutely no detailed,
comprehensive congressional regulation governing
the use of autonomous weapons
or the integration of generative AI
into the military kill chain.
Congress, the body that is supposed to represent the people,
has not written the rules for this new era
of algorithmic warfare.
So in the total absence of legislation,
the executive branch, specifically the defense department,
has essentially stepped into the void
and decreed that AI safety is subordinate to state power.
By breaking Anthropic's resistance
through severe economic threats
and the invocation of the Defense Production Act,
the government is making a definitive statement.
Private ethical guardrails are irrelevant
when national security is invoked.
It sets a chilling precedent.
It signals to the entire tech industry
that absolute compliance with the military's demands
is the only path to economic survival.
And it completely sidelines the public
and our democratic institutions
from the most important debate of our time.
We are allowing administrative memos,
closed-door ultimatums, and classified contracts
to decide the moral and ethical boundaries
of the most powerful technology ever created by humanity.
Which leaves you, the listener,
with a massive defining question for the 21st century.
Who should write the laws for these digital armies?
Should the moral architecture of AI
be decided by unelected tech CEOs
in Silicon Valley conference rooms?
Should it be decided by military generals
in the Pentagon operating in secret?
Or should it be decided by the public
debating and legislating through
transparent democratic institutions?
Because right now, the democratic institutions are silent
and the generals are winning the argument by force.
The speed of technological advancement
has entirely outpaced the speed of our legislative process.
We are operating in a vast legal and moral gray zone.
And the precedents being set right now
like legally designating a domestic AI safety company
as a national supply chain risk
will define the architecture of global power
for decades to come.
Let's quickly recap the astonishing journey
we have taken today, because we've covered a lot of ground.
We started with the jarring reality of helpful
everyday consumer chatbots being integrated
into the Palantir Maven system to coordinate
a massive 1,000-target military campaign in Iran.
We unpacked the tragic consequences of that speed,
exploring automation bias and the horrific human cost
of algorithmic warfare.
We examined the intense unprecedented showdown
between Anthropic's ethical red lines
and the Pentagon's absolute demand for unrestricted access,
leading to threats of economic destruction
via cloud infrastructure and the Defense Production Act.
We watched the tech industry shuffle its loyalties
with OpenAI rushing in to secure contracts,
highlighting the historical shift
from Google's 2018 anti-war protests
to today's deeply integrated defense tech complex.
And we navigated the incredibly complex ethical tightrope
of the double black box,
the failures of international humanitarian law
regarding algorithmic bias and the looming threat
of mass domestic surveillance.
It is a massive amount of information to process.
It is a lot, but I want to leave you
with a final lingering thought to mull over,
one that builds on everything we've discussed today
but brings it right back to your daily life.
Okay, let's hear it.
We often hear the phrase that data is the new oil
or in the military context, data is the new ammunition.
And these consumer AI models, the Claudes, the ChatGPTs,
the models you use every day are becoming the new arsenals.
Millions of us interact with these models daily.
We ask them to fix our code, we ask them to write our emails,
we ask them to brainstorm our business strategies,
or explain complex topics to our kids.
Every single time we do that,
we are providing human feedback.
We are actively teaching the model,
how to reason better, how to parse nuance,
and how to solve complex problems more efficiently.
Right, we are constantly fine-tuning the engine
with our own intelligence.
So here's the provocative question to consider.
As we feed these models, our everyday thoughts,
our writing, and our human problem solving strategies
to make them smarter and more capable,
do we, as everyday consumers,
unknowingly become active contributors
to the military supply chain?
By helping to refine the reasoning capabilities
of a consumer chatbot,
are we inadvertently training the very systems
that will be deployed to the battlefield tomorrow?
Are we in a small, but very real way,
crowdsourcing the intelligence
that will define the future of global conflict?
Wow, that is a heavy, fascinating thought.
Are we all acting as unpaid engineers
in the algorithmic kill chain?
That is absolutely something to think about
the next time you open up a tab and ask an AI
to summarize a PDF or draft an email for you.
Thank you.
Thanks so much for joining us on this deep dive.
Keep questioning the algorithms around you,
keep paying attention to the gray zones, and we will see you next time.

