
AI has crossed the line from tech story to political battleground as the Anthropic–Pentagon dispute, Dario Amodei’s leaked memo attacking OpenAI and the Trump administration, and threats to label Anthropic a “supply chain risk” pull frontier AI companies directly into geopolitics and culture wars. The fight exposes deeper tensions around military AI, surveillance, industry unity, and what happens when AI companies start operating like strategic infrastructure. In the headlines: Jensen Huang calls OpenClaw the most important software release ever, OpenAI reportedly passes $25B ARR as the revenue race heats up with Anthropic, and Google’s NotebookLM adds cinematic AI-generated video reports.
Want to build with OpenClaw?
LEARN MORE ABOUT CLAW CAMP: https://campclaw.ai/
Or for enterprises, check out: https://enterpriseclaw.ai/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, AI is officially political, and how.
Before that in the headlines, is OpenClaw the most important software release ever?
the AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors Recall AI, Robots and Pencils, AIUC, and
Blitzy.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief.
If you are interested in sponsoring the show, or really anything else in the AIDB ecosystem,
head on over to aidailybrief.ai.
While you are there, two things that I want to call your attention to first.
Last day to do our February pulse survey, appreciate everyone who has done that.
This will just take a couple of minutes and it helps us track AI usage and give people
data about what's actually going on and what's trending and what's changing.
And if you contribute, you get that data before anyone else.
And second, it's the last day to sign up for this first edition of Enterprise
Claw.
You can find that at enterpriseclaw.ai.
With that, let's go over to the headlines and some big words from Jensen Huang.
We kick off today with a fun little quote from Nvidia CEO Jensen Huang at the Morgan Stanley
TMT conference on Wednesday.
Jensen absolutely waxed lyrical about OpenClaw, saying, OpenClaw is probably the single most
important release of software, probably ever.
Linux took some 30 years to reach this level.
OpenClaw, in what is it, three weeks, has now surpassed Linux.
It is now the single most downloaded open source software in history.
Now, what he's specifically referring to is not the idea that OpenClaw overall has more
downloads than Linux or Facebook's React library.
What he's referring to is a chart that's flying around, which accurately shows the GitHub
star history of these projects: OpenClaw is officially ahead of those vaunted projects
in GitHub stars, and got there extremely quickly.
Now hold aside the specific details.
The context really matters here.
OpenClaw is a phenomenon that has fundamentally changed how people think about what AI can
do.
It has been ground zero in ushering in the true agent era and one of the more consequential
parts of Jensen's comments is that they came at a Wall Street conference, clearly signaling
that personal agents are a big deal and that investors need to get up to speed.
This shift in AI also aligns with Huang's predictions about where the industry is going.
For more than a year, Huang has been conceptualizing AI tokens as the new fundamental unit of
work in GDP.
During his talk, Huang updated this thesis and claimed the so-called token economy is coming
into focus.
Jensen also discussed Nvidia's recent $30 billion investment in OpenAI, specifically
in the context of it not being the $100 billion deal that was rumored to be in the works
last year.
He said, I think the opportunity to invest $100 billion in OpenAI is probably not in the
cards.
Not because Nvidia has gotten any less bullish on the company, but because Jensen's
base case is that they IPO by the end of the year, meaning in his words, this might
be the last time we'll have the opportunity to invest in a consequential company like
this.
Huang added that Nvidia's $10 billion investment in Anthropic late last year was also probably
their last, which isn't to say that Nvidia won't continue to benefit from the success of
those companies.
For example, Jensen commented that Amazon's gigantic compute partnership with OpenAI means
that Nvidia is, quote, ramping AWS like mad.
Now, OpenClaw is not just a US phenomenon.
In fact, The Information recently reported on the many ways OpenClaw is changing what Chinese
founders are building.
They highlighted a recent OpenClaw hackathon in China, where one contestant made Tinder
for AI agents, basically where OpenClaws can find love interests for their humans.
Another created an automated recruiting site where OpenClaws owned by job seekers and
companies interview each other.
There was also a gamified social media and travel platform that hosts content created
by OpenClaws.
Felix Tao, the co-founder of Mindverse AI, said,
every founder I know is now working on new projects to test the boundaries of what personal
AI agents can do.
One of the interesting differences in the Chinese tech scene is the large companies diving
straight into the new agent trend.
ByteDance, Alibaba, and Tencent are now all offering hosted OpenClaw instances to customers,
something that none of the Western cloud giants have done so far.
Kimi creator Moonshot and MiniMax are also offering cloud-based versions of OpenClaw
within their proprietary apps as a way to draw in new users.
The article also mentioned numerous startups and founders working on OpenClaw projects,
either building features on top of OpenClaw or spinning up competitors in the personal
agent space.
Qvera's co-founder Dong ShiQu said,
tech entrepreneurs in China responded immediately to OpenClaw and launched new projects because
they knew all of their competitors would be doing the same.
Nobody wants to be left behind.
Parker Lyman even tweeted,
this is how competitive it is in China:
OpenClaw installers have started offering two hours of house cleaning as part of the
package in order to win clients.
They'll even list any items you want to declutter on a second-hand marketplace, all for
$57.
Writes Lenny Rachitsky of Lenny's Podcast, I don't think enough people are appreciating
how insane this is.
Over 80 OpenClaw meetups scheduled around the world and more popping up every day, for
a product less than a few months old. I've never seen anything like this.
Something very special is happening.
Now, moving over to the numbers game: just one day after Anthropic's revenue numbers
were leaked to the press, OpenAI struck back and leaked a larger number.
On Tuesday, Bloomberg reported that Anthropic had surpassed $19 billion in ARR, more than
doubling their run rate since the end of last year.
That put them within striking distance of OpenAI, who told investors they had closed 2025
with more than 20 billion in ARR.
Now, as soon as I heard that Anthropic was officially at basic parity with the last
number that we got from OpenAI, I just knew that, from some leak or another, we were going
to get new OpenAI numbers.
And sure enough, late last night, The Information reported that OpenAI has now exceeded $25
billion in ARR.
They also firmed up their 2025 estimate, claiming they actually ended the year with $21.4
billion.
That makes this a 17% jump over the first two months of 2026, which, if it were not for
Anthropic's staggering 36% gain in the last couple of weeks, we'd be talking about with
just as much slack in our jaws.
Sources added that OpenAI's ARR calculation was based on revenue averaged over the past
four weeks, but if they extrapolated just the past week, ARR would be even higher, at $30
billion.
And that's what I'm talking about.
AI might still be in a bubble, because almost every big tech wave is a bubble of some
kind, and the revenue has a long way to go to catch up to CapEx, but the idea that this industry
has no business model is a take aging like a rotted banana.
Lastly today, something which I am absolutely going to come back to and do a more operator-
focused episode on at some point: NotebookLM can now create fully animated videos to
accompany reports.
Google is calling these cinematic video overviews, and the results are pretty impressive.
The demo showed a brief clip of a video overview about mathematical limits, using images
and video with some very cool space-themed visualizations.
Now we did previously have video overviews, but up until now they'd just been slideshows.
They were already a useful extension of audio overviews, but there wasn't as much of a
let's say wow factor.
The new cinematic video overviews are immediately more striking, and pretty much guaranteed to
make people wonder how they were made.
Specifically, they feel more like a native video presentation with custom animations and
images, rather than a simple slideshow leveraging stock images.
Robert Scoble presented an even more impressive example, sharing a video based on summarizing
AI chatter on X over the past few days.
The video opens on an animation of a da Vinci-style contraption, as the voiceover discusses
how AI discourse has moved on from chatbots to infrastructure, agents, and politics.
The video flips through various generated images in a matching style, making the entire
presentation feel like a coherent whole.
It also draws on real photos where relevant.
Scoble said that he analyzed tweets and generated the script externally, but the rest of it
was straight from NotebookLM, which also generated an audio podcast and a mind map.
Now, one of the things that we've talked about numerous times this year is how much Google's
product strategy, I think, is about flexing their lead in multimodal AI, and one could argue
that this is one of the bigger flexes to date, especially if you factor in actual
immediate-term relevance for real people and real workers.
The video overviews orchestrate the Gemini 3 family of models, Nano Banana Pro and
Veo, to weave together voiceover, images, and video in a way that feels like the beginnings,
at least, of a professional video production.
What's more, this is not your grandpa's 10-second video clip.
Scoble's video, for example, runs for almost 5 minutes.
Describing the new tech Google wrote, Gemini now acts as a creative director, making hundreds
of structural and stylistic decisions to tell the best story with your sources.
It determines the best narrative, visual style, and format, and even refines its own work
to ensure consistency.
Now, at this stage, the only downside is that the feature is exclusive to the top-tier
Ultra subscription, making me once again grateful that my job justifies holding one of those
types of subscriptions for all the major players.
Very, very cool stuff from Google, something I'm very excited to play around with more.
For now, however, that's going to do it for the headlines.
Next up, the main episode.
Why is there always a meeting bot in your Zoom call?
Blame Recall.ai.
Recall.ai powers the meeting bots and desktop recording apps behind products like Clouli, HubSpot,
and ClickUp.
They handle the hard infrastructure work, capturing clean recordings, transcripts, and metadata
across Zoom, Google Meet, Microsoft Teams, in-person meetings, and more, so developers don't
have to build it themselves.
If you're building a meeting notetaker or anything involving conversational data, Recall.ai
is the API for meeting recording.
Get started today with $100 in free credits at recall.ai/aidb. That's recall.ai/aidb.
Most companies don't struggle with ideas.
They struggle with turning them into real AI systems that deliver value.
Robots and Pencils is a company built to close that gap.
They design and deliver intelligent, cloud-native systems powered by generative and agentic AI,
with focus, speed, and clear outcomes.
Robots and Pencils works in small, high-impact pods: strategists, designers, and applied AI
specialists working together to move from idea to production without unnecessary friction.
Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results,
including initial launches in as little as 45 days, depending on scope.
If your organization is ready to move faster, reduce complexity, and turn AI ambition into
real results, Robots and Pencils is built for that moment.
Start the conversation at robotsandpencils.com/aidbrief.
That's robotsandpencils.com/aidbrief.
Robots and Pencils.
Impact at Velocity.
There's a new standard that I think is going to matter a lot for the Enterprise AI Agents
space.
It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy, security,
safety, reliability, accountability, and societal impact, all verified by a trusted third party.
One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about
before and is just an absolute juggernaut right now, just became the first voice agent
to be certified against AIUC-1 and is launching a first-of-its-kind, insurable AI agent.
What that means in practice is real-time guardrails that block unsafe responses and protect
against manipulation plus a full safety stack.
This is the kind of thing that unlocks enterprise adoption.
When a company building on 11 Labs can point to a third party certification and say our
agents are secure, safe, and verified, that changes the conversation.
Go to aiuc-1.com to learn about the world's first standard for AI agents.
That's aiuc-1.com.
Want to accelerate enterprise software development velocity by 5x? You need Blitzy, the
only autonomous software development platform built for enterprise code bases.
Your engineers define the project, a new feature, refactor, or Greenfield build.
Blitzy Agents first ingest and map your entire code base, then the platform generates a
bespoke agent action plan for your team to review and approve.
Once approved, Blitzy gets to work, autonomously generating hundreds of thousands of lines
of validated and tested code, with more than 80% of the work completed in a single run.
Blitzy is not just generating code, it's developing software at the speed of compute.
Your engineers review, refine, and ship.
This is how Fortune 500 companies are compressing multi-month projects into a single sprint.
Accelerating engineering velocity by 5x.
Experience Blitzy firsthand at Blitzy.com, that's B-L-I-T-Z-Y.com.
Welcome back to the AI Daily Brief.
When it comes to what I cover on this show, I have a strong preference, as you guys
well know at this point, for changes and updates that are directly and immediately relevant
to you and your lives and your work. And yet, of course, all of those changes are happening
in a larger societal context that we can't ignore, and right now we are in a particularly
notable moment in the history of the politics of AI, which I would describe as something
like: if AI has flirted with politics so far, it is now, through this phase, becoming much
more discretely and distinctly a political issue.
The Verge goes even farther, writing in a recent piece that AI is now part of the culture
wars.
And with a recent memo from Anthropic CEO Dario Amodei, the culture-war dimension of this
conversation is likely to get worse, not better.
I'm sure at this point you've been keeping up to speed with the Anthropic–Pentagon
bunfight, but the quick TL;DR is that Anthropic had a couple of red lines around
domestic surveillance and autonomous weapons that they refused to change in their contract,
which really ticked off Defense Secretary Pete Hegseth, which led to all sorts of threats
of the U.S. government designating Anthropic as a supply chain risk, which is not something
that the U.S. government has historically done for American companies, which led to memos
and much public fighting last week, finally culminating in President Trump blasting
out on Truth Social that Anthropic was now persona non grata with the U.S. government,
and Hegseth following up that not only would they not be working with Anthropic, they
would in fact be pursuing the supply chain risk designation and pushing other defense
contractors to stop working with Anthropic as well.
On the same day that this was all going down, OpenAI announced their own deal with the
Department of War, and it has just been a mess.
In the wake of OpenAI announcing their deal last Friday night, Anthropic CEO Dario Amodei
published a 1,600-word memo that was not happy with basically anyone.
The memo was later leaked to The Information, and Amodei got right to the point.
He opened the memo by writing, I want to be very clear on the messaging that is coming
from OpenAI and the mendacious nature of it.
This is an example of who they really are, and I want to make sure everyone sees it
for what it is.
Dario explained that while he didn't know exactly what was in the OpenAI contract, he had
a few impressions about how their safeguards would work. He suggested that OpenAI would
deploy a model without legal restrictions, but with a safety layer that amounts to model
refusals on certain tasks.
Amodei continued, our general sense is that these kinds of approaches, while they don't
have zero efficacy, are, in the context of military applications, maybe 20% real and 80%
safety theater.
He explained that applications like autonomous weaponry or domestic surveillance rely on
context that the model can't be privy to, such as the presence of a human in the loop or
the provenance of surveillance data.
Amodei also alleged that the idea that Anthropic were offered the same terms as OpenAI and rejected
them was false.
He added that he also believed it was false that OpenAI's terms meaningfully prevent
AI use in domestic mass surveillance or Autonomous Weaponry.
Circling back to earlier statements, Dario reiterated the core concern that the DoW has legal
surveillance powers which are, quote, not of great concern in the pre-AI world, but take
on a different meaning in a post-AI world.
Amodei wrote that Anthropic's negotiations on Friday had ultimately come down to a single
clause in the contract.
According to his retelling of events, the Pentagon had agreed to everything anthropic
had asked for, but required them to delete the specific phrase about analysis of bulk
acquired data.
He said this exactly matched the scenario we were most worried about; we found that very
suspicious.
On autonomous weapons, Amodei said the Pentagon had argued that a human in the loop
is required under the law, but Dario noted that this is only Pentagon policy, which was
added during the Biden administration and could be changed at will by Secretary Hegseth,
adding, so it is not, for all intents and purposes, a real constraint.
Still, a lot of the details of the negotiations were kind of secondary to the main point he
was trying to make.
Specifically, he said that a lot of the messaging from OpenAI and the DoW is, quote, just straight
up lies about these issues or tries to confuse them.
In pretty much no uncertain terms, he accused Sam Altman of acting in bad faith, suggesting
that all of his appearances to support Anthropic in public were just about him acting in a way
that, quote, doesn't make it seem like he gave up on the red lines and sold out when
we wouldn't.
In the spiciest and perhaps most politically fraught part of the memo, Dario argued that
the disagreement didn't actually have to do with the contract.
He wrote,
The real reasons the DoW and the Trump admin do not like us is that we haven't donated
to Trump while OpenAI and Greg Brockman have donated a lot.
We haven't given dictator-style praise to Trump while Sam has.
We have supported AI regulation, which is against their agenda.
We've told the truth about a number of AI policy issues like job displacement.
And we've actually held our red lines with integrity rather than colluding with them to
produce safety theater for the benefit of employees.
Which, I absolutely swear to you, is what literally everyone at the DoW, Palantir, our political
consultants, etc., assumed was the problem we were trying to solve.
Sam is now, with the help of the DoW, Dario continues, trying to spin this as if we were unreasonable,
didn't engage in a good way, were less flexible, etc.
I want people to recognize this as the gaslighting it is.
Coming to a conclusion, Dario writes,
Thus Sam is trying to undermine our position while appearing to support it.
I want people to be really clear on this.
He's trying to make it more possible for the admin to punish us by undercutting our
public support.
Finally, I suspect he is even egging them on, though I have no direct evidence for this
last thing.
Dario argued that the narrative was mostly failing with the general public, but had been
successful with some, in his words, Twitter morons.
My main worry, he concludes, is how to make sure it doesn't work on OpenAI employees.
Due to selection effects, they're sort of a gullible bunch, but it seems important to
push back on these narratives which Sam is peddling to his employees.
So boy howdy, lots to unpack here.
I think it's important to keep in mind that this was Friday night right as this was all
going down.
And I think that there are a couple possible interpretations.
One is that this was some type of strategic play: either a recruitment play, in other
words, to get disaffected OpenAI staffers to come over and join Anthropic, or an attempt
to lean into anti-administration sentiment, basically an act of App Store politics.
Anthropic at the time of Dario's writing had not yet hit number one in the App Store
charts, but already it had rocketed up to number two.
The other possible interpretation, though, of course, is effectively that this was just
a crash out, that it wasn't super considered, and that any of these strategic outcomes were
just secondary to the fact that it was a CEO venting in a sort of private forum that
then became public.
This seems to be what Zvi Mowshowitz thinks, writing, Dario was obviously on mega tilt here,
same as everyone else on Friday.
And the inflammatory stuff, especially about the White House, is deeply effing stupid to
say. The White House was trying to deescalate and Dario needs to eat some crow ASAP.
Now, Zvi here is genuinely sympathetic to Anthropic and AI safety in general,
and so I think it's notable that that interpretation is coming from him.
Unsurprisingly, it does seem that the administration was not happy about this.
Axios business editor Dan Primack wrote, Amodei's blog post is said to have infuriated Defense
Department officials who believe he was trying to virtue signal to (a) Anthropic employees
upset about the Venezuela revelations and (b) AI engineers at rival companies who might
share similar concerns.
Now, I'm pretty sure Dan was talking about a previous memo, not this most recent one,
with the implication being that the same logic from the previous memo applies to the Friday
night writing as well.
It is worth noting, as we interpret things, that Dario has never been a big fan of Trump.
A news article from last September reported that in a Facebook post urging friends to
vote for Kamala Harris, Amodei had likened Trump to a feudal warlord.
He also cut ties to a number of law firms who had made deals with the president.
While pretty much everyone agreed that this was not going to work out all that well for
Anthropic vis-a-vis the White House, even if they generally supported Dario's position,
there were more mixed feelings around his accusations with regard to Sam and OpenAI.
Dean Ball wrote,
I do not share the cynicism of some with respect to OpenAI's actions in the DoW–Anthropic
dispute.
It basically seems to me as though OpenAI was attempting to de-escalate last week. Whether
they executed well is a separate question, but in their defense, good execution in such
chaos was nearly impossible.
It seems OpenAI tried to reduce tensions and find a productive path forward, while allowing
its employees considerable latitude to speak their minds.
The easy thing would have been for management to stay quiet and let this happen.
They did not do that, and they also stood firm in opposition to the supply chain risk
designation.
In general, OpenAI is unjustly maligned.
This is the thing that bothers me the most about Dario's leaked memo.
It spends so much time on OpenAI conspiracies and cynicism that I fear industry solidarity
in the future will be harder than it needs to be.
This is not the last time we will see state interference into Frontier AI, and until we
build formalized structures for such interference, it will be important for the industry to hang
tough together.
I fear that will be less likely now.
Interestingly, Sam Altman seems to agree with Dean that the Pentagon contract announcement
wasn't handled as well as it might have been.
During his first All Hands dealing with the issue, Altman said that he didn't regret signing
the deal, but wished he hadn't rushed to announce it last Friday night.
Echoing previous comments, he said the announcement made OpenAI look opportunistic and not united
with the field.
Sources said the tone of the All Hands meeting was respectful, with employees trying to drill
down on the details of the contract.
Altman apparently empathized with the mood in the room, saying, to try so hard to do the
right thing and get so absolutely personally crushed for it, and I know this is happening
to all of you too, so I feel terrible for subjecting you to this, is really painful.
A source speaking with the New York Post said that the reaction within the company was largely
positive, save for a small group.
They said, from the internal messages, people are pragmatic and agree that Friday night
was perhaps a little rushed and not the best communication, but now that there is more
information, it feels like everybody is generally positive, save for like these 30 people
who are always the ones raising questions.
And while no one has publicly quit over the contract, reinforcement learning lead Max Schwarzer
announced on Monday that he had decided to leave OpenAI to join Anthropic, which basically
everyone assumed was a direct response to this.
That said, not only did Schwarzer not throw OpenAI under the bus, he tried to give at
least a plausible reason for his move that wasn't this, saying that he wanted to return to
doing individual work as a researcher rather than continuing in a management position.
On Wednesday evening, the Financial Times reported that Anthropic had restarted negotiations
with the Pentagon around the contract.
Anthropic was reportedly back in discussions with the Department of War Under Secretary
for Research and Engineering, former Uber executive Emil Michael.
You might remember him as the person who referred to Amodei as a liar with a God complex
just about a week ago.
The reporting framed the talks as a last-ditch effort to strike a deal and avoid being
labeled a supply chain risk.
And while they said that the memo was likely to complicate negotiations,
they did not include any sourcing about the administration's current outlook on it.
Axios, however, did receive comment from the administration, which threw cold water on the
prospect of a reconciliation.
An administration official said,
ultimately this is about our warfighters having the best tools to win a fight,
and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting.
What's more, even before a formal supply chain risk designation,
military contractors are already ripping out Anthropic's tech.
CNBC reports that a number of defense contractors are telling employees to stop using
Claude and switch to other models.
The reporting directly references the threat to label Anthropic a supply chain risk as the cause.
Opinions have been pretty unified that the designation goes way too far,
including even from central figures at OpenAI.
For example, on Monday, former NSA and Cyber Command Director and now OpenAI
board member Paul Nakasone said,
this is not a good space for our nation.
We need Anthropic. We need OpenAI.
We need all of our large language model companies to be partnering with our government.
The moves of the defense contractors show why these types of threats are so pernicious.
No one who has mission critical and business essential contracts with the US government is going
to take those risks. Alexander Hartstrick of J2 Ventures, which has a focus on the defense space,
said that already 10 of his firm's portfolio companies have, quote,
backed off of their use of Claude for defense use cases and are in active
processes to replace the service with another one.
Now while this is undoubtedly the largest AI politics issue,
and one that is thrusting it into the mainstream,
it has political coattails that are dragging other things in as well.
As elections get closer, the conversation around data centers, for example,
is getting more heated as well. This week, the president finalized the big tech pledge on data
center energy use. The pledge was signed on Wednesday at a White House Roundtable with several
tech executives in attendance. Attendees included Microsoft President Brad Smith
and OpenAI COO Brad Lightcap. Anthropic, of course, was not represented, but they also
haven't begun building their own data centers. Seven companies signed the pledge,
namely Google, Meta, Microsoft, Oracle, OpenAI, Amazon, and XAI.
So this covers all of the hyperscalers as well as each AI startup currently building
significant AI infrastructure. Substantively, the tech companies have pledged to bring their own
power supply, either through constructing new power plants or paying to cover the cost of
expanded infrastructure. The pledge doesn't prescribe any particular solution,
but the president said that each company should negotiate directly with utilities to ensure
they're paying an appropriate rate. The agreement states that the tech companies will be on the
hook for additional costs even if they pull out of data center projects. That was presented as a
key term that could assuage fears of overbuilding into an AI bust, with consumers left holding
the bag. In addition, the companies signed up to contribute power back to local grids in times of
need. These load management agreements have been in place in Texas for several years, and have
proven fairly successful at keeping the grid operational during winter storms. The pledge is
structured as an agreement with the president, so it's unclear if it carries any legal weight,
but the president pointed out that this pledge is in the best interest of the hyperscalers.
Articulating quite simply the obvious political truth, Trump said,
they need some PR help because people think that if a data center goes in,
their electricity prices are going to go up. Some centers were rejected by communities for that,
and now I think it's going to be the opposite. AI czar David Sacks took to Twitter to
laud the deal and critique opposing types of data center policies. Sacks wrote,
this is a much better approach to affordability than Bernie Sanders' total ban on new data centers,
which would halt the construction boom currently driving wage growth and job growth for
blue collar workers. In fact, the rate-payer protection pledge will lower electricity
prices when AI companies pay for grid upgrades and sell their excess power back to the grid.
The right approach to data centers is not to stop progress altogether,
but rather to protect residential ratepayers from price increases,
while making it easier to stand up new power generation.
Speaking of Bernie Sanders, it's very clear that he thinks this is a winning political issue,
and one that he's very much not going to let go. He put out a video of himself flying to Berkeley,
speaking with some of the more prominent AI doomers like Eliezer Yudkowsky,
and then released the video to his Twitter. Geoff Shullenberger of Compact Magazine
is unsure that this is the right strategy for AI criticism. Geoff writes,
the economic populist view of AI is, or should be, quite different from the Yudkowskian doomer
view. However, because the latter is more narratively compelling and urgent seeming,
economic populists seem to be embracing it. This is unfortunate.
Finally, showing just what absolutely weird bedfellows AI issues are going to bring together:
Future of Life Institute founder and AI safety advocate Max Tegmark announced the pro-human AI declaration.
The Verge reports that a secret meeting took place back in January to sign this document,
and the group of people represented in the 90 attendees are, to say the very least, scattered
across the political spectrum. The group who've signed this thing include everyone from
MAGA influencer and former presidential advisor Steve Bannon to Ralph Nader.
If you want to know more broadly what I think about the anti-AI movement, which parts of it
we should be paying attention to, and how we should be engaging, I have a whole episode on that
from last week. For now, for the purposes of this episode, the big thing that I want to track,
and where we'll conclude, is that part of the fallout of the Anthropic–Pentagon fight is that
something which has remained mostly on the sidelines so far as a political issue is now being
absolutely thrust into the mainstream. Hopefully pretty soon we can get a reprieve from this. In any case,
I'll probably try to dial back the coverage unless something truly huge happens,
but that is where things stand from where I'm sitting. And that is going to do it for the AI
daily brief. Thanks as always for listening or watching, and until next time, peace!
The AI Daily Brief: Artificial Intelligence News and Analysis
