
In recent weeks, the Defense Department has tussled with Anthropic over how its artificial intelligence could be used on classified systems. That fight became bitter and negotiations fell apart. And war in the Middle East has made it increasingly clear how much the U.S. military has been relying on A.I.
Sheera Frenkel, who covers technology for The New York Times, explains the standoff and what it reveals about the future of warfare.
Guest: Sheera Frenkel, a New York Times reporter who covers how technology affects our lives.
Background reading:
Photo: Brendan Smialowski/Agence France-Presse — Getty Images
For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This podcast is supported by PhRMA.
America leads the world in medicine development.
It matters.
We get new medicines first, nearly three years faster.
Five million Americans go to work because we make medicines here at home.
And not relying on other countries keeps us safe.
But China is racing to overtake us.
Will we let them?
Or will we choose to stay ahead?
When America leads, America cures.
Let's tell Washington to keep us in the lead.
Learn how at AmericaCures.com.
From The New York Times, I'm Natalie Kitroeff.
This is The Daily.
As the US bombardment of Iran has escalated,
it's become increasingly clear just how much the US military
has been relying on sophisticated artificial intelligence.
And that's made the Defense Department's bitter fight
with the AI giant Anthropic over who controls that technology,
one of the most high-stakes strategic battles of our time.
Today, my colleague, Sheera Frenkel,
on the standoff between the Trump administration and Anthropic,
and what it really reveals about the future of warfare.
It's Monday, March 9.
Sheera, it's wonderful to have you back on The Daily.
Thank you for having me.
So, as this war in the Middle East has progressed,
we've been hearing more and more about the US using AI
in its attacks on Iran.
It's one of the first times really where this technology
is very clearly having a practical application for the US military.
We are seeing it in action.
And at the same time, in the background,
there has been this ongoing bubbling battle
over the use of that technology.
So, we're going to get into the specifics of all of that.
But first, can you just lay out what this fight is fundamentally about?
Well, this fight is so much bigger than one company
and this particular moment with the Pentagon.
It's really about the future of warfare
and the role that AI is going to play in war.
Right now, in the Middle East, as the US looks for targets to strike,
it is using Anthropic's technology to analyze intelligence,
analyze satellite imagery, and figure out where it wants to hit.
AI can analyze data for the military faster than a human being possibly could.
It's proving its worth every single day.
And so, in a sense, these private technology companies
based in Silicon Valley and the Pentagon need each other more than ever.
But there's a question about how they're going to work together
going forward, as we all hurtle towards this vision of robot wars,
of AI-backed weapons, fighting AI-backed weapons.
They're trying to figure out who gets to say what's safe and what's not.
So, on one side, you have these private Silicon Valley companies.
You have Anthropic, which is the first AI company that was authorized
to work on classified US military systems.
You have OpenAI, which is this behemoth of AI companies.
You have long-standing companies like Google and Microsoft,
which have AI divisions.
So, you really have a number of very powerful companies in the valley
that want to do business with the Pentagon,
and are, in some cases, doing some business with the Pentagon,
figuring out how to navigate that relationship.
And on the other side, you have the Pentagon,
which is thinking about this global AI arms race against China,
Iran, and Russia, and how America is going to fare in that.
And just to get a lay of the land here,
can you just explain how the Pentagon
is broadly making use of this technology?
What function it plays?
So, right now, AI plays a huge role in what's called
SIGINT, or signals intelligence.
What I mean by that is that the military at any given time
is ingesting an incredible amount of data,
text messages, postings on social media pages, phone calls,
all of this is intelligence that's gathered by the military
and then used to make critical decisions.
Now, in the past, there was a room full of human beings
that would have to sit there and analyze all this intelligence.
But now, we have AI, and this is exactly what AI is really good at.
It ingests data, and then it tells you,
here's an important note you should take out of this.
Here's my summary.
Here's one phone call that's better than all the other phone calls
you should actually be listening to.
And so, this is critically important right now
in the Middle East, where we're seeing this AI technology being used.
But spinning forward, it's only going to become more important
as AI gets better and better, and the military wants to integrate it
into more parts of its weapons arsenal.
Okay, so a hugely important debate happening
at a very important time.
Just orient us, Shira.
How did this whole fight start?
It actually starts in this very positive, optimistic way
in that the Pentagon issues a call out last year
saying it wants to introduce AI.
It invites all these AI companies to basically come into the military
and show them how they can be helpful.
How can the Pentagon, the Department of Defense,
start integrating AI into its own systems?
And they immediately get a lot of takers.
You've got Silicon Valley's biggest AI companies,
Google, xAI, Anthropic, and OpenAI,
all raise their hands and say,
we want to participate, we want to work with the Pentagon.
And of all the AI companies that begin working with the Pentagon,
Anthropic emerges as kind of the best
and the most seamlessly integrated into the Pentagon systems.
It's working with Palantir, this data analytics company.
It's one of the only ones that is approved
to work on classified systems.
And so people across the DOD tell us
that it really quickly became absolutely fundamental
to their work and made their lives easier.
Okay, so I just want to pause here
because from what I know of Anthropic,
this is a company that brands itself
as the socially responsible AI company,
the company that emphasizes AI safety a lot.
And so it's just kind of interesting to me
to hear that they were the first ones
to be so embedded within the US military.
That's true.
This is a company that was founded
by people who left OpenAI
because they wanted a safer AI company.
They said they wanted more safeguards.
I mean, this is their entire premise
and how they draw employees to work there.
What they also are, however,
is a company that really believes
in working with the government.
We've seen their top executive say
that they think AI can make our country safer.
It can help the US military defend against adversaries.
They are, by all accounts, deeply patriotic as well.
And so while the two things don't seem to naturally go hand in hand,
I think in the minds of their chief executives,
at least from people that are sitting in the room with them,
they say, yes, they wanted to work with the government
and they thought they could be the ones to do it safely.
Okay, so that explains why at this point in the story,
all sides are working well together.
When do things start to change?
Things start to change on January 9th,
when the Secretary of Defense, Pete Hegseth,
comes out with this pretty big memo.
And he tells the military,
he tells everyone across Silicon Valley
that things are about to change.
AI is critical for the future of warfare.
China's developing AI weapons,
Russia's developing AI weapons,
if the US wants to be competitive,
AI has to be at the center of everything
from autonomous weapons like drones or fighter jets
that have no pilots to data systems.
And this kicks off a need for new contracts
with all the AI companies and they do what companies do.
Their lawyers start sending contracts back and forth
with the Pentagon's lawyers trying to figure out
how they can come to some sort of new agreement about this.
And how does that go?
They have differences, they have things
that they're trying to figure out,
but it's all sort of happening quietly behind the scenes.
When all of a sudden something happens
that ends up escalating tensions
between Anthropic and the Pentagon.
News reports emerged that Anthropic's Claude technology
was used as part of the capture of Nicolás Maduro,
the Venezuelan leader.
Right, I remember when that came out,
it was this surprising moment to find out
that an AI model was used to do something like that,
like this very on-the-ground operation
that involved boots on the ground
and lots of planning, AI was in the middle of it.
Yeah, I mean, I think it was even surprising,
confusing for people who work at Anthropic,
who did not know if their technology was used
in the Maduro raid.
It even came up in a meeting that happened
between one employee at Anthropic
and another employee at Palantir,
and the Anthropic employee asked, do you know anything about this?
You know, is our technology being used?
It was not something that they appeared aware of.
But whether or not Anthropic's technology was used,
at the Pentagon, the fact that a private Silicon Valley company
would even be raising questions about this
was seen as inappropriate.
You had the Secretary of Defense,
telling people around him that he didn't like Anthropic,
even asking questions about how their technology was being used.
And in the midst of all these kind of sensitive negotiations
happening about the future of Anthropic and the Pentagon,
this was kind of the kindling that they didn't need.
So basically, the Defense Department sees this inquiry
by this employee at Anthropic as a sign
that the company is challenging the military's use
of the technology.
Yeah, exactly.
They see it as a sign that this private company
that's talked a lot about safety is going to try
and impose its own rules, its own guardrails,
its own ideas of safety onto the Pentagon.
And in the midst of all these sensitive negotiations,
it suddenly becomes a crisis.
It suddenly spills over from emails back and forth
between lawyers to big public statements
by senior figures at the Pentagon.
And what is the crux of the crisis itself?
The crux of the crisis is over Anthropic wanting
to define safety and wanting to limit two specific ways
in which the Pentagon can use their technology.
They want it codified into their contract with the Pentagon
that their technology will not be used
for the mass surveillance of Americans
and it will not be used for autonomous weapons.
And why has Anthropic drawn those red lines
on these uses of AI?
What's the rationale here?
Well, they're worried about a few different things here.
First and foremost, they're not sure that AI is ready.
AI might have a 1% or 2% error rate,
but when it comes to something like picking a target
to hit with a missile,
that kind of error rate could mean life or death.
Right, huge consequences.
Huge. Now imagine, secondly, the PR disaster.
If a news story comes out that Anthropic's AI was used
to hit a target that ended up being wrong,
suddenly this company has an absolute PR nightmare
on their hands, where Americans are contending
with a very real-life version of the scenario
from science fiction books, where they always say
the robot chose the wrong target and humans were killed.
And thirdly, they've got to worry about their own employees.
People who work there are not comfortable
with working with the military.
People who work there are worried about the use of AI and war.
They really risk alienating a lot of the people
that they paid a lot of money to come work at that company.
Right, it's worth saying that these employees are very valuable.
There's a total talent war on to attract these people
and you don't want to risk losing them.
Yeah, that's right.
Some of the most highly sought after engineers
across Silicon Valley and that's saying a lot.
We're talking about contracts potentially
with tens of millions of dollars to acquire some of these people.
Got it.
So it sounds like there is a broad set of reasons
why Anthropic is not wanting to do this.
What about the Pentagon?
What do they make of this?
The Pentagon is mad.
They're sitting there and saying,
hey, you are a private company.
You do not get to make these calls.
Whoever decides that AI is ready to control the weapon
should be sitting here in the Pentagon, in the military,
we are the ones that make these calls
and really, how dare you, in their view, as a private company,
try to tell us how to build our weapons systems.
They're saying it's not your role.
It's our role.
That's our job.
Exactly.
And the Pentagon is saying we are going
to implement all lawful uses of this technology.
So they're making the argument that Anthropic is really
asking for something that isn't necessary.
So things escalate and escalate.
And they result in this meeting between the Secretary
of Defense, Pete Hegseth, and the chief executive
of Anthropic, Dario Amodei.
The CEO of one of the biggest AI companies in the world
is meeting with Defense Secretary Pete Hegseth today
as the Pentagon threatens to essentially blacklist
that company, Anthropic, from lucrative government contracts.
The meeting is civil for the most part until the very end.
Defense Secretary Pete Hegseth gave CEO Dario Amodei
until the end of the week to sign a document
ensuring the military would have full access
to the company's AI model.
The Secretary tells Dario Amodei, hey,
you have until Friday, 5 p.m. Eastern time to compromise,
work it out, figure it out, but we are giving you
a hard deadline, or we're going to take
some type of action against you.
And what is the action?
What's the threat?
So there are actually two threats made against Anthropic
and they're pretty opposed to one another.
One is that Anthropic will be labeled a supply chain risk.
And this is a designation that America has used in the past,
mostly for foreign companies who produce something abroad
and which America feels is not safe
for national security reasons for the government to be buying.
So they would be essentially saying,
hey, Anthropic, we think you're dangerous as a company
for national security and nobody in the government can use you.
The other threat would see them invoke
this Defense Production Act, which labels a company
so necessary to national security
that they have to work with the federal government.
These seem like pretty extreme threats.
I mean, the government is saying,
we're either going to force Anthropic to comply
or inflict a ton of pain on this company
by punishing anybody else that does business with them, essentially.
Yeah, I mean, they are extreme and it leads
to this rare moment of solidarity across Silicon Valley.
These companies who usually, I mean, quite honestly,
hate each other suddenly come together
and they say we stand behind Anthropic,
the AI community stands behind Anthropic and their red lines.
And I think of all the voices that emerged,
the most interesting is Sam Altman,
who's the chief executive of OpenAI.
He historically has not gone along with Anthropic.
These are a bunch of guys that left his company
and said his company wasn't safe
and started their own company.
There is no love lost between the leadership at OpenAI
and the leadership at Anthropic.
And he even stands up and he says,
no, no, I back them, I back Anthropic.
And here we should just disclose for transparency
that the New York Times is currently suing OpenAI
over the use of its models.
That's right.
So all day Friday, tension is building.
People are tweeting in support of Anthropic.
They're telling the company to hold the red lines
and Anthropic's executives, their lawyers are on the phone.
I mean, minutes, minutes before the deadline hits.
They're still on the phone with the Pentagon
trying to figure this all out.
And then the deadline hits, 14 minutes pass,
and two things quickly happen.
Now to a major development in the clash
between the US Department of Defense and Anthropic,
President Trump has ordered the federal government
to stop using its technology.
After the AI firm refused to back down.
One is that the DOD announces there is no deal.
Defense Secretary Pete Hegseth says
he will designate Anthropic
a supply chain risk to national security.
Anthropic is a supply chain risk.
It's going to be booted, banned
from the entire federal government.
Saying any contractor that does business
with the US military will not be allowed
to conduct commercial activity with Anthropic.
President Trump called Anthropic a radical left woke company,
which will not dictate how the United States fights
and wins wars.
And then they issue another surprise.
They actually have an ace in their back pocket.
Anthropic's relationship appears to have ended,
but OpenAI is ready to make a deal.
This whole time in the background,
they have been quietly negotiating directly
with Sam Altman, the chief executive of OpenAI.
Wow.
And this whole time, he's been negotiating himself
directly with the Pentagon.
And Sam Altman says that he got exactly the deal
that Anthropic wanted, but he had actually decided
to take a very different approach
to the entire negotiation.
We'll be right back.
This podcast is supported by BP.
Behind the delivery trucks that keep your life stocked,
thousands of BP employees go to work every day.
People discovering oil and natural gas on shore and off.
People refining it into products you rely on,
people shipping fuel, where our customers meet it,
and people helping drivers fill up
at our convenient locations.
They're part of around 300,000 jobs
we support across the country.
See all the ways BP is driving American energy forward
at bp.com slash investing in America.
Your call is important to us.
Please hold for the next non-existent representative.
Ever get a pit in your stomach
when you realize your afternoon is about to become
a bureaucratic cage match over a bill?
Someone will be with you shortly.
Skip it.
Save time by letting Experian negotiate your bills
and look for new deals and savings opportunities.
Get started in the Experian app now
and see how much you could save.
Results will vary, not all bills eligible,
savings not guaranteed, available
with eligible paid memberships.
Go to Experian.com for details.
Thanks for being.
Okay, Sheera, you said that Sam Altman
took a much different tack with the Pentagon
in these negotiations.
What do you mean by that?
So Anthropic had been asking this entire time
for certain things to be codified into their contract.
They wanted established that their technology
could not be used in these very specific ways
that were important to the company.
What Sam Altman did was say,
hey, we don't need that type of language in the contract.
What we're going to do is write our own guardrails,
our own safety measures into the code itself.
Engineers call this writing into the stacks
and it's something that AI companies do all the time.
They update their safety measures.
They quote, write into the stacks guardrails
that they think are important.
And so he's saying, it's not on you,
it's on us, whatever's important to us,
whatever safety measures we have as OpenAI,
we are going to make sure are there.
And just explain why that version of things
where the company is in control of writing
these safeguards into the models,
why that wasn't good enough for Anthropic.
People who work at Anthropic make the argument
that when you write something into the stacks,
it can be unwritten.
You can write something else the next day.
It is not permanent.
These stacks get changed daily.
They could even be changed hourly.
And in their view, there was not enough
to stop the Pentagon from saying, okay,
well, you wrote that into the stacks today,
but tomorrow we're telling you to do something else.
Essentially, you're saying their fear is that
this kind of guardrail is much more movable.
It's not permanent enough.
It doesn't guarantee that the limits
will be respected long term.
Exactly.
So the Pentagon came out of this winning, it sounds like.
I mean, I think that from their point of view
from the DOD folks we've talked to,
they are happy they got open AI on board.
I think that where the Pentagon may run into problems
long term is the broader AI community in Silicon Valley
and how this is really brought to the forefront,
this bigger question of AI and weapons,
AI and the government, is AI going to be dangerous
and is the government thinking about it in a responsible way?
I think that whole debate is now in the public consciousness.
Right, and I have to imagine that the extent
to which this administration was willing
to really throw the book at this American AI company,
but has to have had something of a chilling effect
in the industry, right?
Oh, definitely, I spoke to someone who works at Google
who said, you know, that's terrifying.
If they can threaten to label Anthropic a supply chain risk
or to use this defense production act against them,
what's to stop them from doing it to any tech company
in Silicon Valley if they don't get their way?
And so there's been this moment of trust building
between Silicon Valley and the Pentagon
that's happened slowly over the Trump administration
and we've really seen a lot of that shattered
in the last week or so.
And what about the companies at the center of this, Sheera?
Like, how do they net out?
Because obviously, OpenAI has this victory
in terms of getting the contract,
but at the same time, it's hard to ignore the PR benefits
that have come out of this for Anthropic.
This company was very popular among software engineer types,
but before all of this, it was by no means well-known
among the general public.
And now all of a sudden Anthropic is this topic
of national conversation, right?
I mean, we saw that in the immediate aftermath of all this
Anthropic's Claude technology shoots to the top of the app store
for the first time in the company's history.
They have not just become a household name,
but they've become a household name
that's synonymous with security, safe AI.
And that's a huge PR win in a moment
where so many people are still afraid of AI.
Right, you're saying it's not just that people
are talking about the company.
It's that they're talking about it as a company
that values safety and responsibility.
And you can see why that might be appealing.
That's right.
Out here in Silicon Valley, I think Anthropic
is really emerging as a winner
in terms of the PR battle for the hearts and minds of engineers.
And right now Anthropic is really being seen
as an ethical company that stuck to its guns
and did what it said it was going to do
in terms of safety measures.
And here in Silicon Valley engineers are talking
about how they want to go work for them.
And so that could net out really
as a big win for Anthropic.
After Altman signed the deal,
there was a lot of blowback across Silicon Valley
for the terms that he had reached with the Pentagon.
I actually saw people in the streets of San Francisco
holding up a sign saying Anthropic stands strong.
Wow.
And you see online people who work at these companies
voicing both support for Anthropic
and dismay with OpenAI.
And that pushback from engineers
has complicated things for Sam Altman.
He's had to meet with his own employees more than once
to assure them that he's going to seek
a safe contract with the Pentagon.
And he's had to do a lot of kind of internal PR work
among people at his company.
To try to do damage control,
it sounds like with his own employees.
Exactly.
And we've seen him announce subsequently
that he may have made a mistake rushing too quickly
into a deal with the Pentagon.
And that he's actually sought new language now
around the mass surveillance of Americans
and other assurances so that his employees
will not be as upset as they have been in the last few days
about this contract with the Pentagon.
So where this stands now is that you have two of Silicon Valley's
largest companies basically battling it out
over what safe AI looks like.
On one hand, you have Sam Altman, OpenAI,
and his version of working with the Pentagon.
And on the other, you have Dario Amodei and Anthropic sort of saying,
this is how we think safe AI should play out.
And Sheera, through all this,
it's clear that both companies are trying to win the optics
battle in all of this.
Both are claiming the mantle of safety,
asserting or reassuring people, their own employees,
that that's what they care about.
But I just want to push on what they actually mean by that,
by safety, because when we were talking earlier
about the red lines, Anthropic insisting
that its model shouldn't be used
for mass surveillance or autonomous weapons,
they were saying their models just aren't ready yet.
They're still error prone.
And so it sounds like they're arguing,
it's not safe to use their model in those ways now.
But do you think these companies are opposed
to those models being used for mass surveillance,
for autonomous weapons ever?
No, I think ultimately these companies are well aware
that the way the world is headed
is that AI is going to be at the center
of pretty much everything the government does
from surveillance to weapon systems.
AI is going to play a role.
You also have to remember these companies are really competitive.
They're technologists who love what they do.
They love the future of AI.
And so there's also sort of a personal vested interest
in making the AI good enough to play this really central role
across the government.
Right, I mean, and there's billions at stake,
we should say, in this industry being invested,
these companies are locked into competition
with each other.
And there's no going back is what you're saying.
There is no going back when you speak
to some of these technologists,
they describe what the world looks like in the future.
And honestly, depending how much sci-fi you've read
in your life, that is a very attractive vision
or a really scary vision of the future.
So they look forward and they imagine a war
in which there's no human soldier on the battlefield.
We're back in Washington or wherever on some military base,
there's a guy with a headset
who's controlling a fleet of drones
or submarines or pilotless fighter jets.
And they're fighting against another nation state
which has very much the same.
The surveillance of all these targets
is happening through AI systems
that can comb through imagery faster than the human brain
can process a single photograph.
And all these decisions are happening at lightning speeds.
That's what they see all of us kind of hurtling towards.
What you're saying is this fight that we've been describing
between Anthropic and the Pentagon and OpenAI,
it didn't actually forestall the future.
In some ways, it just made clear to everyone that it's coming.
That's right.
They are all clear that it's inevitable
and what all these companies agree on,
what the Pentagon agrees on is that they're all active partners
in making this a reality.
Sheera, thank you so much.
Thank you for having me.
We'll be right back.
This podcast is supported by Earth Justice.
The Environmental Protection Agency has a legal obligation to protect Americans from dangerous
pollution.
But the Trump administration is abandoning its responsibility by repealing the endangerment
finding.
For 17 years, the endangerment finding has affirmed what Americans can see with their own
eyes.
That climate pollution threatens communities, public health and safety, and the economy.
With justice, Environmental Defense Fund, Sierra Club, and the Natural Resources Defense
Council are taking the Trump administration to court.
Join the fight at earthjustice.org slash danger.
This podcast is supported by BP.
Behind every BP fill up, thousands of people across America go to work every day.
From the people producing oil and gas in the Gulf today, to those discovering resources
we'll need tomorrow, to the people refining our fuels, all the way to the people who help
you at one of BP's family of retail stations.
They're part of around 300,000 US jobs BP supports across the country.
See all the ways BP is driving American energy forward at bp.com slash investing in America.
Self-directed investing, trading, full-service wealth management, automated investing, financial
planning, thematic investing, retirement planning, whew!
And to think that's just a small taste of what Schwab offers.
Because Schwab knows when it comes to your finances, choice matters.
No matter your goals, investing style, life stage, or experience, Schwab has everything
you need all in one place.
So you can invest your way.
Visit Schwab.com to learn more.
Here's what else you need to know today.
Iran has named a new Supreme Leader, Mojtaba Khamenei.
Khamenei is the 56-year-old son of the recently killed Supreme Leader, and his appointment
signals the government's desire for continuity.
Khamenei has been coordinating military and intelligence operations at his father's office,
and he has very close ties to the powerful Islamic Revolutionary Guard Corps.
President Trump has called the younger Khamenei an unacceptable choice.
Before the announcement, Trump told ABC that whoever is selected as Iran's next leader
is, quote, not going to last long without the approval of the United States.
And over the weekend, the US and Israel intensified their attacks on Iranian military targets
and vital energy infrastructure.
Israeli warplanes bombed several fuel depots in and around Tehran, saying they were being
used by Iran's military.
The airstrikes created an apocalyptic scene in the capital, setting off oil fires that
turned the horizon orange and blanketed the city with dark, oily smoke.
Water desalination plants were also struck in Iran and on the Persian Gulf Island of
Bahrain, threatening to further disrupt the lives of millions in the region who depend
on desalination for drinking water.
Finally, on Sunday evening, oil prices surged to over $100 a barrel for the first time
in four years, a worrying sign about the war's potential effect on gas prices.
Trump said in a truth social post on Sunday that higher oil prices would be short-lived
and called them, quote, very small price to pay for peace.
Today's episode was produced by Rikki Novetsky, Rachelle Bonja, Diana Nguyen, Eric Krupke,
and Michael Simon Johnson, with help from Mary Wilson.
It was edited by Mark George and Lisa Chow.
It contains original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Our theme music is by Wonderly.
This episode was engineered by Alyssa Moxley.
That's it for the daily.
I'm Natalie Kitroeff.
See you tomorrow.