
February 2026 was the month that AI's transformation stopped being an insider story and cascaded across groups — from developers embracing a new era of autonomous agents to Wall Street panic-selling stocks in AI's crosshairs to Washington's first real power struggle over who controls the technology. This KPMG-sponsored monthly recap puts a fine point on how the rise of agentic AI, the SaaS apocalypse, and the Anthropic-Pentagon conflict all connect as part of one sweeping shift. In the headlines: the latest on Anthropic versus the US government including Claude's reported use in the Iran strikes, and OpenAI's record $110 billion fundraise.
Want to build with OpenClaw?
LEARN MORE ABOUT CLAW CAMP: https://campclaw.ai/
Or for enterprises, check out: https://enterpriseclaw.ai/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief: the month AI woke up. Before that in the headlines, the latest
on Anthropic versus the US government.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, AIUC, Blitzy, and Scrunch.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you
can subscribe on Apple Podcasts. To learn about sponsoring the show, or really anything
else about the show, check out aidailybrief.ai.
One of the things we've been talking about a lot is our Claw Camp and our Enterprise
Claw programs.
Claw Camp is an always-free, self-directed program; Enterprise Claw is an upcoming paid training
program led by Nufar Gaspar.
Registration is open for that right now and will close at the end of the week. You can
find out more at EnterpriseClaw.ai, or again just from aidailybrief.ai.
Now with that out of the way, let's catch up with Anthropic.
Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in
around five minutes.
I said over the weekend as we were covering the Anthropic Pentagon story that we were
probably going to have quite a few updates on this one in the weeks to come, and indeed
that is certainly the case.
The conflict between Anthropic and the Pentagon slash White House slash Trump slash
Hegseth took on a new light over the weekend as the US and Israel launched preemptive strikes
on Iran.
Now, some thought that maybe this made that 5:01 p.m. deadline on Friday not arbitrary,
and instead driven by the Pentagon's need for an approved and operational AI system in
place ahead of the Saturday operation. But as per Wall Street Journal reports, Anthropic's
technology ended up being used in the strikes despite being declared a supply chain risk
hours earlier. Sources said that Claude was used to analyze intelligence, help select targets,
and carry out battlefield simulations.
To be clear, there are no suggestions that Claude piloted fully autonomous weapons, but
the Pentagon has confirmed that this was the first time that autonomous kamikaze
drones were deployed in an active mission.
Their use highlights that autonomous weaponry is part of modern warfare already and
doesn't require the use of frontier LLMs.
Additionally, despite OpenAI signing a new deal on Friday, that company's models were
not used in the attack.
Katrina Mulligan, OpenAI's head of national security partnerships, said that that wouldn't
have been possible, as the models haven't yet been approved for use in classified settings.
TLDR: in spite of some of the chatter, it doesn't actually appear that the Pentagon
hot-swapped AI models on the Friday night before an operation.
As the president said on Friday, there's a six-month phase-out period where Anthropic's
tech will remain in military use.
Still, for some, all of this makes the way that it played out even more confusing and
contradictory.
Democratic Congressman Seth Moulton wrote: Friday, the Pentagon claims Anthropic is a national
security risk and should be blacklisted.
Saturday, the Pentagon still uses Anthropic's Claude during its strikes on Iran.
Either they used tech that is a natsec risk during military action, or they lied in the
first place.
So that might be what's going on in the actual deployed world of military operations.
But what's happening in the world of consumer sentiment is very different.
Anthropic saw their downloads spike over the weekend, driving Claude to number one on
the app charts, overtaking ChatGPT for the first time.
Claude was outside the top 100 free apps at the end of January and spent most of last
month outside of the top 20. Taking advantage of their surge in popularity, Anthropic
promoted the ability to easily migrate memory from ChatGPT for those making the switch.
Now as a side story, this is something that people are paying a lot of attention to.
The Signal account on Twitter writes: this is incredibly fascinating, because we initially
thought that memory is a moat, but it turns out it is just a file you can take with you.
I suspect people aren't going to do this at scale, but very interesting to see this
play out and stress-tested.
Now to be clear, this isn't some super sophisticated feature.
It's basically a big ol' prompt that Claude gives you, that you paste into whatever LLM
you're using, and then you take the results and paste them back to Claude.
Being just a prompt, it is not going to be perfect and it is not going to have all the context, even
if it gets you started.
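To make the mechanics concrete, here's a sketch of what that copy-paste flow amounts to. Everything here is hypothetical, the function names and prompt wording included; this is not Anthropic's actual feature, just the shape of it:

```python
# Hypothetical sketch of prompt-based memory migration between chat apps.
# There is no real API in play: the "export" is just a block of text the
# old assistant generates, and the "import" is pasting it into the new one.

def build_export_prompt() -> str:
    """Prompt you paste into the source LLM (e.g. ChatGPT)."""
    return (
        "Summarize everything you remember about me, preferences, "
        "projects, and facts, as a plain-text memory file I can paste "
        "into another assistant."
    )

def build_import_prompt(memory_file: str) -> str:
    """Prompt you paste into the destination LLM (e.g. Claude)."""
    return (
        "Here is a memory file exported from my previous assistant. "
        "Treat its contents as things you already know about me:\n\n"
        + memory_file
    )

# The whole "migration" is two copy-pastes:
export_prompt = build_export_prompt()            # paste into the old app
import_prompt = build_import_prompt("<reply>")   # paste its reply into the new app
```

The point is that there is no moat here: the "memory" is portable precisely because it is just text.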
In any case, on Saturday, Sam Altman hosted an AMA on X to answer public questions about
their new contract with the Department of War.
One of Altman's big points was that the threat of labeling Anthropic a supply chain
risk is bad for the entire industry.
He wrote: we said this to the DoW before and after.
We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.
I feel competitive with Anthropic for sure, but successfully building safe superintelligence
and widely sharing the benefits is way more important than any company competition.
I believe they would do something to try to help us in the face of great injustice if
they could.
We should all care very much about the precedent.
OpenAI later published a blog post containing details of their contract with the Department,
including the full text of the sections dealing with AI red lines.
Regarding autonomous weapons, the contract states: the AI systems will not be used to independently
direct autonomous weapons in any case where law, regulation, or department policy requires
human control.
Regarding domestic surveillance, the contract laid out a series of applicable laws and directives,
adding: the AI system shall not be used for unconstrained monitoring of U.S. persons'
private information, as consistent with these authorities.
Now, many pointed out that this language does not prevent OpenAI's technology from being
used for autonomous weapons or domestic surveillance, as long as the Pentagon deems that
use to be lawful.
Self-professed AI security hawk Peter Wildeford posted: OpenAI is trying to claim simultaneously
that, A, their contract with the Pentagon allows for all lawful purposes, and, B, that
their red lines are fully protected.
The way OpenAI bridges this is by saying the protections live in its deployment architecture
and safety stack rather than in the contract language.
But if this contract says all lawful purposes and your safety stack prevents a lawful purpose,
you're in breach of contract.
The Pentagon can just say: we both know your model can do this, you should remove that safeguard.
And then OpenAI would have to comply or be sued.
Both Sam Altman and natsec lead Katrina Mulligan responded to this particular point.
Mulligan said,
A lot of the concerns about the government's all lawful use language seem to stem from
mistrust that the government will follow the laws.
At the same time, people believe that Anthropic took an important stand by insisting on contract
language around their red lines.
We cannot have it both ways.
We cannot say that the government cannot be trusted to interpret laws and contracts the
right way, but also argue that Anthropic putting policy red lines in a contract would have been
effective.
Setting out OpenAI's approach, she continued: let the democratic process decide on the
legality and proper use question.
Now, somewhat overshadowed by the conflict with the Pentagon, OpenAI finalized the
largest startup fundraising round in history on Friday morning.
The round ultimately totaled $110 billion, valuing OpenAI at an $840 billion post-money
valuation.
The valuation positions OpenAI as the most valuable startup ever and the 15th most valuable
company in the world.
They are now worth slightly more than JPMorgan Chase.
Notably, the round remains open, and OpenAI expects another $10 billion from financial
entities, including UAE investment fund MGX, by the end of March.
The $110 billion is entirely from three corporate strategic partners.
Nvidia and SoftBank invested $30 billion each; details were a little scant on this front,
but OpenAI mentioned the Nvidia strategic partnership includes additional chip supplies.
The largest investor, however, was Amazon, who put $50 billion into the round.
This investment is split between $15 billion due at the end of March and a further $35 billion
contingent on OpenAI going public or hitting unspecified milestones.
It is rumored that these milestones include achieving AGI.
I don't know why these companies keep putting a term as nebulous as AGI as a condition in
their contracts.
It's just going to make lawyers rich later.
Now, overall, the Amazon strategic partnership is wide-ranging.
OpenAI will expand their server rental deal with AWS from the previously announced $38
billion over seven years to $138 billion over eight years.
As part of the agreement, OpenAI has also committed to use Amazon's Trainium 3 and
forthcoming Trainium 4 chips.
OpenAI and Amazon will also jointly develop AI models to power Amazon's consumer apps.
The Amazon deal also has some interesting implications for Microsoft, who notably did
not make a further investment as part of this round.
Microsoft continues to hold the exclusive right to serve so-called stateless versions
of OpenAI's models, and the revenue-sharing agreement also remains in place, so Microsoft
will take a cut of revenue generated through AWS.
Amazon will be the exclusive provider of OpenAI's frontier AI agent management tool,
aside from the first-party deployment; the OpenAI-branded version of the tool
will be hosted on Azure.
Alongside the fundraising numbers, we also now learned that ChatGPT has 900 million
weekly active users.
The last reported figure was 800 million in October, and reports suggested that stagnating
user growth had been part of the trigger for Sam Altman's code red in December.
The announcement underscored that subscriber growth is also strong, now reaching 50 million.
Writes OpenAI:
subscriber momentum accelerated meaningfully to start the year, with January and February
on track to be the largest months of new subscribers in our history.
People use ChatGPT to learn, write, plan, and build.
As usage scales, the product improves in ways people feel immediately: faster responses,
higher reliability, stronger safety, and more consistent performance.
OpenAI also shared that they now have more than 9 million paying business users across
startups, enterprises, and governments. In addition, weekly Codex users have tripled
since the beginning of the year to reach 1.6 million.
Now that might be the perfect segue to talk about the big changes that ended up characterizing
February, so with that, we will close the headlines and move on to the main episode.
Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision
point.
Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster?
KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build,
Buy, or Borrow Decision, does a great job cutting through the noise with a practical
framework to help you choose based on value, risk, and readiness, and how to scale agents
with the right trust, governance, and orchestration foundation.
Don't lock in the wrong model.
You can download the paper right now at www.kpmg.us/Navigate; again, that's www.kpmg.us/Navigate.
There's a new standard that I think is going to matter a lot for the enterprise AI agent
space.
It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy, security,
safety, reliability, accountability, and societal impact, all verified by a trusted third party.
One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about
before and is just an absolute juggernaut right now, just became the first voice agent
to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent.
What that means in practice is real-time guardrails that block unsafe responses and protect
against manipulation plus a full safety stack.
This is the kind of thing that unlocks enterprise adoption.
When a company building on 11 Labs can point to a third party certification and say our
agents are secure, safe, and verified, that changes the conversation.
Go to aiuc-1.com to learn about the world's first standard for AI agents.
That's aiuc-1.com.
With the emergence of AI code generation in 2022, Nvidia master inventor and Harvard
engineer Sid Pardeshi took a contrarian stance:
inference-time compute and agent orchestration, not pre-training, would be the key to unlocking
high-quality AI-driven software development in the enterprise.
He believed the real breakthrough wasn't in how fast AI could generate code, but in how
deeply it could reason to build enterprise-grade applications.
While the rest of the world focused on co-pilots, he architected something fundamentally different.
Blitzy, the first autonomous software development platform leveraging thousands of agents that
is purpose-built for enterprise-scale code bases.
Fortune 500 leaders are unlocking 5X engineering velocity and delivering months of engineering
work in a matter of days with Blitzy.
Transform the way you develop software.
Discover how at Blitzy.com.
That's B-L-I-T-Z-Y.com.
Quick question.
When was the last time you actually visited a website to research something?
If you're like me, AI pretty much does that work for you now.
That of course raises a new question for brands:
if AI is doing the discovering, researching, and deciding, who or what is your website
really for?
That shift in user behavior, the rise of AI bots becoming your most important new visitors,
is what my sponsor Scrunch is taking head-on.
Scrunch is the AI customer experience platform that helps marketing teams understand how
AI agents experience their site, where they show up in AI answers, where they don't,
and what's preventing them from being retrieved, trusted, or recommended.
And it's not just visibility.
Scrunch shows you the content gaps, citation gaps, and technical blockers that matter, and
helps you fix them so your brand is found and chosen in AI answers.
Now, for our listeners, Scrunch is providing a free website audit that uncovers how AI sees
your site, where there are gaps, and how you're showing up in AI versus the competition.
Run your site through it at scrunch.com/aidaily.
Welcome back to the AI Daily Brief.
As part of my collaboration with KPMG, at the beginning of each month we do a
little bit of a recap of the month that preceded it.
For listeners who are somewhat less regular and don't have time to catch every show, it's
meant to serve as a quick, simple recap of the key themes.
Meanwhile, for those of you who are here every day, it's to put a fine point on any changes
that the previous month represented.
Despite the incredible amount of attention around it, not every month in AI is huge.
However, February of 2026 was.
This was the month that crystallized, for a number of different groups, that, to quote
one of the viral pieces from the month, something big is happening.
And in fact, one of the things that made the month so interesting was the way that the
recognition that something had changed, that something big was indeed happening,
cascaded across all sorts of different groups.
Let's talk about the AI Insiders first.
This is basically people like you guys: the enfranchised, highly engaged, probably
vibe-coding type of AI users who actually pay attention to when new models
launch and what new capabilities they have.
This is the group for whom basically the period from the holiday break at the end of
last year up until now has been a steady realization and embrace of the idea that
the generation of models that came around last November represented something meaningfully
different than those that came before.
The core and first manifestation of this was of course around software engineering, and
one of the people who's been in the eye of the storm, communicating what so many
others have felt, is former OpenAI cofounder Andrej Karpathy.
About a week ago he tweeted: it's hard to communicate how much programming has changed
due to AI in the last two months. Gradually and over time, in the progress-as-usual way,
but specifically this last December.
He then goes on to explain exactly what happened. Effectively, he says, coding agents basically
didn't work before December, and they basically do now. As he puts it, the models have significantly
higher quality, long-term coherence, and tenacity, and they can power through large and long tasks
well enough that it is extremely disruptive to the default programming workflow.
Programming, he writes, is becoming unrecognizable. The era where you type code into an editor
is done, he says, and instead we are now in the era of spinning up AI agents, telling them
what to do in natural language, and then managing their work.
The biggest prize, he says, is about orchestration: how many of these agents can you have going
at once in a way that actually adds up to something real?
He concludes: this is nowhere near business-as-usual time in software.
And I think this does a pretty good job of summarizing what has shifted. In short, agents
that can actually do work, whom you give not a plan but just a goal and let them come
up with the plan, are now, for many of these most enfranchised users, the primary way that
they get value out of AI.
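Karpathy's orchestration question, how many agents you can have going at once in a way that adds up, is easy to sketch in miniature. Here's a minimal, hypothetical pattern; the agent internals are stubbed, and nothing here is any product's actual code:

```python
import asyncio

async def agent(name: str, goal: str) -> str:
    """Stub agent: a real one would loop a model with tool access until
    the goal is met; here it just 'works' and reports back."""
    await asyncio.sleep(0)  # stand-in for model calls and tool use
    return f"{name}: done -> {goal}"

async def orchestrate(goal: str, subgoals: list[str]) -> list[str]:
    # You hand over a goal; the orchestrator fans it out to agents
    # running concurrently and collects their reports.
    tasks = [asyncio.create_task(agent(f"agent-{i}", sg))
             for i, sg in enumerate(subgoals)]
    return await asyncio.gather(*tasks)

reports = asyncio.run(orchestrate(
    "ship the feature",
    ["write the code", "research prior art", "draft the announcement"],
))
```

The hard part in practice isn't the fan-out; it's making the concurrent work add up to one coherent result, which is exactly the orchestration prize Karpathy is pointing at.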
And what's more, in February this was given a name and a face and an icon in what was first
named Clawdbot, for a very short time named Moltbot, and ultimately finalized as OpenClaw.
OpenClaw has been so far the biggest, clearest manifestation of the change in autonomy
ambition.
OpenClaw created a process by which users could give those powerful new-generation models
access to their systems and let them actually do meaningful work on their behalf.
It started with simple personal-assistant-type things; indeed, the OpenClaw homepage
still says: cleans your inbox, sends emails, manages your calendar, checks you in for your flights.
But that is 100% not where it stayed.
Almost immediately, people were using OpenClaw for much more extensive and much more ambitious
autonomous or semi-autonomous work.
I did a show around mid-month about the 10-agent team that I built, which included one
developer agent, two researchers, five project managers, one chief of staff, and a partridge
in a pear tree.
Because of OpenClaw, Mac minis, and for some even Mac Studios, became the hot new visualization
of the new era of AI. And again, what's super important to point out is that despite OpenClaw
being very meaningfully not for beginners, something that indeed requires a ton of technical
work and, frankly, beating your head against the wall as you sort through just legions of
different problems, despite all of that, it was not just developers who were excited about
it. It was all sorts of different types of people.
I have no better evidence for this than the response to Claw Camp, which is the self-directed
program I put together that basically took the process that I had gone through to figure
out how to build both my first agent and then the agent team, and turned it into a sequence
that other people could follow.
It is not an easy sequence; it takes a lot of time and a lot of hard work. And yet nearly
5,500 people are doing it right now.
By the end of the month, we were starting to see the manifestation of ideas that have
long lurked around the edges of AI as some exciting future potential but which were now
coming to the fore.
The entrepreneur Ben Cera, by himself, built a company called Pulsia, which is an AI for
running autonomous AI companies. Basically, you sign up for Pulsia, give it an idea, or just
ask it to surprise you, in which case it'll go do some research and come up with a relevant
idea that seems related to you, and then it will build a company around it. Pulsia gives
it access to everything from GitHub to Meta ads, basically everything that you could
need to run an online business.
The company is up to an annual run rate of over $1.25 million in just a couple of weeks.
So if this whole increase in autonomy ambition was the key theme represented by OpenClaw,
wouldn't you think that all the big labs would be racing to catch up with that? Indeed
you would, and indeed that's what happened. At the beginning of the month, OpenAI released
the Codex app, the latest in their push to catch up with and then eventually try to exceed
Claude Code, and by the middle of the month they made another huge move by hiring the
creator of OpenClaw to build these types of systems inside the context of OpenAI.
Many thought that the whole situation surrounding OpenClaw was a bit of a bungle for Anthropic,
given that it had originally been named Clawdbot after Anthropic's Claude; instead
of embracing it, Anthropic asked for a name change, which is what led to Moltbot and eventually
OpenClaw.
Still, pretty much everyone assumed that the type of features that OpenClaw made available
were likely to come to Claude Code pretty soon. And sure enough, over the last week and a
half or so, we saw first Anthropic release remote control, basically a way where you can move
from a Claude Code session on your computer to managing it from your phone while you're
on the go.
This being, of course, one of the main appeals of OpenClaw: the fact that you get to interact
with it through Telegram or WhatsApp or another app on your phone. We also saw Anthropic
release scheduled tasks inside of, not Claude Code, but Cowork. Also over the last week, Perplexity
announced Perplexity Computer. Now, they've been working on this for the last couple of months,
but it's very much playing in the same space as OpenClaw, in that you give it a wildly ambitious
task to be built and it can just go figure out how to do it. Microsoft announced Copilot Tasks,
and we've also heard reports that Microsoft CEO Satya Nadella is actually using OpenClaw and
encouraging his team to check it out. And so on and so forth. We also got Notion custom agents.
Basically, I think you can assume that this clawification of AI, in other words the move to
actual agentic AI, is going to continue to proliferate across the industry.
Now, as we transition to the next group that woke up, it's important to note again that the people
who were Claude Coding and OpenClawing weren't just the devs. In preparation for a segment about
these new tools, CNBC's Deirdre Bosa went to try to build her own version of Monday.com with
Claude Cowork, just to understand and share with her audience what's actually possible.
She figured it wouldn't work, but it would be a good way to show people the current state of the
technology. An hour later, she writes: I literally have my own Monday.com that's plugged into my
calendar and Gmail, and it surfaced two kids' b-days that were not anywhere on my radar that I need to get a
gift for. Finance and markets content creator Joe Weisenthal was actually a couple weeks ahead of
everyone else, starting to play around with Claude Code in a big way back in January, and in many
ways preceding the realization that the rest of Wall Street would have coming into February.
Because if one group that woke up in February was the AI insiders, the other group was Wall Street.
February was the month of the SaaS apocalypse, and it actually started off at the end of January
when, after Google shared the demo version of Genie 3, where you could create 60-second immersive
worlds, a bunch of gaming industry publisher stocks fell. But that would be just the very beginning.
But the big actual story of the SaaS apocalypse would end up being that basically every time
Anthropic announced some new plugin for Claude Code or Cowork, a set of stocks that were
somewhere between directly and nominally related to that plugin's focus would just absolutely
crater. On February 10th, Bloomberg wrote that Wall Street's new hot trade was dumping stocks that
were in AI's crosshairs. And it was not just one category: we saw this in games, in legal software,
in finance, and in general productivity software, with IBM seeing their worst single-day drop in
25 years because Anthropic wrote a blog about its COBOL tool, which had been announced months
earlier. All of this was the perfect caustic environment for Citrini Research to drop their
highly viral piece called The 2028 Global Intelligence Crisis, which basically articulated a
theoretical doom-loop scenario that led to utter economic catastrophe. And despite that report
producing a lot of good counter-conversation as well, when in the middle of last week Block
announced that it was cutting 4,000 employees, about 40% of its overall staff, many pointed to
it as evidence of the exact sort of white-collar carnage that the Citrini report was discussing.
Now, there has of course been a lot of debate about the extent to which it might be the biggest
case of AI-washing we've seen so far, but this is where the environment is heading out of
February and into March: Wall Street is extremely jumpy when it comes to AI,
and this time it's not because of the size or circularity of infrastructure deals but because AI
might be too good. Then, of course, there's Washington. Not only was February the month Washington
woke up to AI, it was the month the rest of us woke up to the complication of the relationship
between Washington and Silicon Valley when it comes to AI. I just did an extended episode about
this, and we talked about the latest in the headlines, but of course the TLDR is that, through a
series of steps seemingly going back to the Nicolas Maduro Venezuela raid, there was a negotiation
where Anthropic wanted specific red-line carveouts around AI being used for autonomous weapons
and for domestic mass surveillance, with the White House instead wanting the standard to be any
lawful uses. Now, of course, this disagreement wasn't just about these specific uses; it was much
more about who gets to determine for what, and how, AI is used. It was the first manifestation of
what was always an inevitable power struggle, even if it happened in a very ugly way. Indeed,
before the fight took its most dramatic turn, members of Congress like Thom Tillis were already
pretty disgusted with how the whole thing was happening. Tillis said: why the hell are we having
this discussion in public? Why isn't this occurring in a boardroom or in the secretary's office?
I mean, this is sophomoric. It of course only got more so as President Trump and Defense Secretary
Pete Hegseth announced that not only would the US government not be working with Anthropic, but
that they were going to be designating them a supply chain risk, arguing that that meant that
other contractors of the US government would also have to drop their relationships with Anthropic,
which, if it came to pass, would have some pretty serious and dramatic implications. Now, this
particular manifestation of the battle itself isn't even done yet, and again, it is just the
first in what will be a much bigger power struggle in the years
to come. So those were the big things. AI insiders woke up and changed their level of autonomy
ambition. Broader white-collar workers got dragged along and started to use things like Claude
Code, Claude Cowork, and even OpenClaw. This spilled over into a recognition on Wall Street that,
as Matt Shumer's post put it, something big was happening, which led to lots of chaos in the
markets. And to cap it all off, we have of course the absolute bunfight that is Anthropic versus
the Pentagon. In terms of other key details, a couple things are worth noting, especially as we
keep an eye on what we expect in March. First of all, while we didn't get the much-anticipated
DeepSeek 4, we did get Seedance 2.0 from ByteDance, a video generation model that had
many people in the AI industry asking whether it was the first example of Chinese open-weight
models not only catching up with the US but actually exceeding it. The big one to watch for in
March is of course that DeepSeek event, which people have been predicting as coming next week
basically every week for the last four weeks. Anthropic added Sonnet 4.6 to Opus 4.6, making a
complete 4.6 suite, and Google dropped Gemini 3.1 Pro. One interesting note about Google's
releases is that they're very clearly flexing the opportunities around multimodal, although
exactly how that's going to hit and what it's going to matter for, especially as everyone is
just talking about code generation, remains to be seen. Capping the month in models off was
another Google model, Nano Banana 2, which was actually more of a functional upgrade than
anything else, making Nano Banana much faster and cheaper while also improving things like
text handling and text reasoning. One final story from the month that I think does a good job
of capturing where we are: METR finally shared where Codex 5.3 and Opus 4.6 land on their
long-horizon task study. Both were high, but Opus 4.6 especially was basically off the charts.
In other words, we are in uncharted territory, where even the metric that became one of the
most important, if not the most important, ways of showing AI progress last year
just can't keep up any longer.
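For a sense of what a long-horizon metric of this kind measures, here is a toy sketch in the spirit of METR's approach: fit a logistic curve of success probability against log task length, then read off the length at which the model succeeds half the time. The data and fitting routine below are made up for illustration; this is the shape of the method, not METR's actual code or numbers.

```python
import math

# Made-up results: (task length in minutes, 1 = model succeeded).
tasks = [(1, 1), (2, 1), (5, 1), (15, 1), (30, 0), (60, 1),
         (120, 0), (240, 1), (480, 0), (960, 0)]

def fit_logistic(data, lr=0.1, steps=5000):
    """Fit p(success) = sigmoid(a + b * log(minutes)) by gradient ascent
    on the log-likelihood (which is concave, so this converges)."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for minutes, y in data:
            x = math.log(minutes)
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += (y - p)       # gradient w.r.t. intercept a
            gb += (y - p) * x   # gradient w.r.t. slope b
        a += lr * ga / len(data)
        b += lr * gb / len(data)
    return a, b

a, b = fit_logistic(tasks)
# 50% horizon: solve a + b * log(m) = 0, i.e. m = exp(-a / b).
horizon_minutes = math.exp(-a / b)
```

A model being "off the charts" on this kind of study just means its 50% horizon has moved far beyond the task lengths the benchmark was built around.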
And with March now here, we could be heading for something else huge. My X-slash-Twitter today
is filled with rumors of GPT, not 5.3 but 5.4, and a lot of breathless discussion about
how much better it is. We'll see if that's actually true, but no matter what, 2026 is off to a
rollicking start. That is going to do it for today's AI Daily Brief. I appreciate you listening
or watching as always. Thanks to KPMG for sponsoring the monthly recap, and until next time, peace.
The AI Daily Brief: Artificial Intelligence News and Analysis
