
All these comments from the government, despite the bluster, despite the back and forth,
despite the threats, they are just admitting they can't live without Claude, and frankly,
I agree with them on that, because you'd have to pry Claude from my cold, dead hands if you wanted
to take it away. Welcome to the Artificial Intelligence Show, the podcast that helps your business
grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO
of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host
and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters
and give you insights and perspectives that you can use to advance your company and your career.
Join us as we accelerate AI literacy for all.
Welcome to Episode 200 of the Artificial Intelligence Show. I'm your host, Paul Roetzer,
along with my co-host, Mike Kaput. We are coming to you live on, well, we're recording this live
on Monday, March 2nd. It is 9:30 a.m. Eastern time. I will often timestamp the beginning of these.
This week may be more important than others, because our first main topic today is going to be
all about Anthropic and their battle with the Department of War over the use of Claude
for military applications and the monitoring of US citizens. It is a wild topic. It has been a
fast-moving, fluid topic all throughout the weekend. It's very possible that as we're recording this,
some things are going to be happening. The unique, very unique thing today about Episode 200
is this is the first time we are recording one of these with a live audience. We have invited our
AI Academy Mastery members to join us for this recording. Normally, it's just me and Mike
hanging out doing our thing. Today, we are doing this through a Zoom webinar, so we actually have
Mastery members joining us. There is a chat I am seeing out of the corner of my eye going. We are
going to take questions from Mastery members at the end. We are grateful to all of our AI
Mastery members and everybody who's joining us today from around the world. This is kind of a
cool way to celebrate. We were trying to think, what could we do for Episode 200 that would be
fun and unique? And we came up with the idea of, let's just invite some people to be here while we're
doing it. Mike and I are going to do our usual thing. We just happen to have some friends with us
today to listen in, and then we're going to take some questions from them afterwards. Anything
else, Mike, on that before I run us through the "presented by" and then dive into this?
No, I think it's just business as usual. Yeah, business as usual.
All right. So today's episode is brought to us by AI Academy by SmarterX, which is where our
Mastery members are coming from. AI Academy helps individuals and businesses accelerate their AI
literacy and transformation through personalized learning journeys and an AI-powered learning
platform. There are 13 professional certificate course series available on demand now, with more
being added each month. As a matter of fact, we just added two more: we added AI for Financial
Services, for industry, and AI for Finance, for departments. So we have an AI for Departments collection
and an AI for Industries collection. And those two new course series with certificates just
went live in the last two weeks. So those are great. Go check them out at academy.smarterx.ai. You
can learn all about Academy and the AI Mastery membership program. Okay. If you're new to the podcast,
we start each week with a recap of the previous week's AI Pulse survey. So we ask these questions;
you can go to smarterx.ai forward slash pulse and participate in the survey each week. These are
informal polls of our listeners of how they're feeling about topics we talk about on the podcast.
So Mike, we asked last week: Microsoft AI CEO Mustafa Suleyman says most white collar tasks will be
fully automated by AI within 12 to 18 months; how realistic do you find that timeline? So it looks
like 53% said partially realistic: some tasks will be, but most not. It's interesting. 33% said
too aggressive: meaningful automation is three to five years out. And then 9% said very
realistic: I'm already seeing it in my work. And I think, Mike, what I said at the time we talked about
that was, I think the tech will be able to do that; I just don't think human friction and organizational
resistance to change will allow it to happen. That was kind of my general take on that.
And then the second was: where do you land on AI-generated video using real people's likenesses?
58% said the tech is impressive, but using real people without consent
crosses a line. 22% said it's inevitable and the law needs to catch up. And then 20% said it's
mostly a misinformation risk; that's what concerns me. I don't see anybody that said it's creative
experimentation and not a big deal. Yeah, perfect. All right. So when we wrap up today,
we'll give you this week's pulse, but again, you can go to smarterx.ai forward slash pulse and
participate in the surveys. All right. So yeah, what a 72 hours, Mike. No kidding. So when we were
preparing for Episode 200, Wednesday, things started going sideways with the US government and
Anthropic, and Mike's going to break down sort of the chain of events over the three days leading
into the weekend. And then we'll talk about kind of where we've landed as of Monday. We went
into the weekend thinking that main topics one, two, and three were all going to be related to
Anthropic versus the US government, and then we decided Sunday to sort of consolidate this. So I
will just give you a warning: this is going to be a bit more extensive than the average main
topic, I would say. Mike, is that fair to say, I would guess? Okay. So we're going to do our best to
provide context. I will say that on Saturday or Sunday, I put a post on LinkedIn that I think
is relevant here. And I was basically saying why, on the podcast, we choose to try and be
as objective and unbiased as possible, especially when it comes to things related to politics and
religion. There are times on the show where we have to wade into those areas; we have to touch on
things that can be politically challenging to talk about in a neutral way. And I would say
this is one of those. We're going to do our best to just sort of present the information as best we
can, describe it in a factual way, and then allow people to sort of form their own perspectives and
points of view. So if you're new to the podcast, that is how we approach this kind of thing. And I
just wanted to say that up front, because some of these things are hard to talk about in this neutral
way, and I'll do my best to share that perspective. So Mike, I'll turn it over to you and you can
kind of walk us through what happened, and then I will do my best to unpack it all. Sounds good, Paul.
So this past week, the Trump administration blacklisted Anthropic from all federal government work,
and that kind of capped off this extraordinarily bitter showdown over the future of AI and warfare.
So up until this week, interestingly enough, Anthropic was one of the more deeply embedded AI
companies in US defense, and that's thanks to a $200 million contract that was awarded back in
July of 2025. And as a result of that, Claude was actually the only frontier model approved for
the military's classified networks, and it was deployed in some of those through a partnership
with Palantir. But as part of its contract, Anthropic had these two non-negotiable safety
conditions. One, Claude could not be used for the mass domestic surveillance of Americans. And two,
it could not be used to power fully autonomous weapons. Now, the Pentagon, led by defense secretary,
or I guess war secretary at this point, Pete Hegseth, who had recently declared at SpaceX's
headquarters that military AI, quote, will not be woke, decided these guardrails were unacceptable.
And reportedly, a flashpoint might have come after the US military raid that captured
Venezuelan president Nicolas Maduro; the Pentagon claimed Anthropic raised concerns about Claude's
use in that operation, though Anthropic CEO Dario Amodei flatly denied doing so. But regardless,
whether it's a miscommunication or not, there was a play by play that started right around
Tuesday and into Wednesday, carried through the week and into the weekend, and escalated
very, very quickly. So I'm going to kind of walk through very quickly what the timeline was
of the major events, and then we'll dive deeper into this. So on Tuesday, apparently Hegseth called Amodei
into a tense and, according to the reporting, not warm and fuzzy meeting at the Pentagon and delivered
an ultimatum: Anthropic must allow the military unfettered access to Claude for all lawful
purposes, and he gave Amodei until 5:01 PM Eastern on Friday to comply. If he refused, the Pentagon
threatened to invoke the Defense Production Act to force compliance, or they threatened to formally
designate the company a supply chain risk. Then on Wednesday, overnight, the Pentagon apparently
sent Anthropic its kind of best and final offer of how this could all work, given Anthropic's
concerns about those two red lines that they don't want to cross. Anthropic reviewed the contract.
They felt the new language was not good enough; it was a bit of a facade, and the concessions were
paired with all sorts of escape hatches and legalese that would allow, perhaps in their view, the
military to disregard the safety guardrails they found to be so important. Now Thursday,
this starts to really explode into public view. Amodei releases a statement declaring that
Anthropic cannot in good conscience accede to their request. This causes all sorts of political
backlash. The Pentagon's technology chief, Emil Michael, took to X calling him a liar with a
God complex. Meanwhile, the tech industry started to mobilize: hundreds of employees from Google and
OpenAI signed an open letter urging their executives to also stand in solidarity with the
red lines Anthropic had drawn. And then deadline day, Friday, hits, and a kind of play by
play during the day plays out, with some kind of crazy consequences for what's going on here.
So in the morning, behind the scenes, apparently Hegseth's team offered a major concession, agreeing
to remove loophole phrases from the contract. In the afternoon, the deal apparently fell apart
completely when Anthropic learned the Pentagon still wanted to use AI to analyze bulk data collected
from Americans; that crossed the line on mass surveillance. Furthermore, Anthropic rejected a
proposed compromise to simply keep Claude in the cloud to distance it from what they would call edge-based
autonomous weapons. Desperate to prevent a collapse, top bipartisan Senate defense leaders
sent a private letter begging the Pentagon to extend the deadline. About an hour before the 5:01
PM deadline, President Trump took to Truth Social, calling Anthropic left-wing nut jobs and ordering
all federal agencies to immediately cease using Anthropic technology, initiating a six-month phase-out
period. At 5:01 PM, the deadline passes; Amodei has not caved to any of the demands.
And in the evening, Hegseth officially designated Anthropic a supply chain risk to national security.
It is worth noting here, this is an extraordinary blacklisting tool that is historically reserved
for foreign adversaries. It bans any defense contractor from doing commercial business with
the company. Anthropic immediately vowed to challenge this in court. And Friday night, and we'll talk
through this as well, in a big final twist, hours after this happens to Anthropic, Sam Altman
announces that OpenAI has officially reached an agreement with the Pentagon to deploy its models
on classified networks. And the catch is that the Pentagon actually apparently agreed to OpenAI's
terms, which included keeping the deployment strictly in the cloud and enforcing the exact
same prohibitions on mass surveillance and autonomous weapons that Anthropic had just faced
resistance for defending. So Paul, a lot to talk about here. Can you maybe walk us through, in depth,
more of what is going on here? What parts of this are worth paying attention to and importantly,
kind of what you think might happen next? Yeah, as I was just saying, we are just getting started.
We could literally do two hours just on this topic; there's so much to unpack.
And it is not done; I feel like we are probably in the early stages of all of this. A lot
happened in three days, but I feel like there's a lot more to happen. To rewind back to even just
before everything started going crazy on Wednesday: Time magazine actually had an article. I think this
is relevant, because my assumption is all this was already happening behind the scenes, and it was
not accidental that this came out. So, Time magazine; we'll put the link in the show notes.
As the story says: Anthropic, the wildly successful AI company that has cast itself as the most
safety-conscious of the top research labs, is dropping the central pledge of its flagship safety
policy. The company decided to radically overhaul the responsible scaling policy that we've talked
about on the show many times. That decision included scrapping the promise to not release AI models
if Anthropic can't guarantee proper risk mitigations in advance. This is a quote: "We felt that
it wouldn't actually help anyone for us to stop training AI models,"
Anthropic's chief science officer Jared Kaplan told Time in an exclusive
interview. "We didn't really feel with the rapid advance of AI that it made sense for us to make
unilateral commitments." If competitors are blazing ahead, it commits to, so that's
Anthropic, matching or surpassing the safety efforts of competitors. And it promises to delay
Anthropic's AI development if leaders both consider Anthropic to be the leader in the
AI race and think the risks of catastrophe to be significant. So again, just relevant context:
ironically, 24 hours before all this happened, an article comes out saying Anthropic is shifting
the safety policies that are the foundation of the company, for whatever that's worth.
Okay, so then, expanding on some of the items you touched on, Mike. So the first is this Anthropic
statement. So basically, Dario and Anthropic get this word from the government that they basically
have, like, three days to concede to these requests. So Anthropic released a statement on Thursday.
And all this is out publicly, so we're like, oh, what are they going to do? What's going to happen
Friday? So they released the statement on Thursday, the 26th. And in that statement, which is
attributed to Dario, it says: the Department of War has stated they will only contract with AI
companies who accede to, quote, any lawful use, and remove safeguards in the
cases mentioned above. They have threatened to remove us from their systems if we maintain these
safeguards. They have also threatened to designate us a supply chain risk,
a label reserved for US adversaries, never before applied to an American company, and to invoke the
Defense Production Act to force the safeguards' removal. These latter two threats are inherently
contradictory: one labels us a security risk, and the other labels Claude as essential
to national security. The statement continued: regardless, these threats do not change our position.
We cannot in good conscience accede to their request. Then, you know, everything kind of goes
24 hours later. And then the tweet, because Anthropic is like, we still haven't gotten any
official word from the government; all we have is a tweet from Hegseth. So on Friday at 5:14 PM,
following the Trump Truth Social thing, it says: this week Anthropic delivered a masterclass
in arrogance and betrayal, as well as a textbook case of how not to do business with the United
States government or the Pentagon. Our position has never wavered and will never waver. The Department
of War must have full, unrestricted access to Anthropic's models for every LAWFUL (in all caps)
purpose in defense of the republic. And then it goes on and on and says: in conjunction with the
president's directive for the federal government to cease all use of Anthropic's technology,
I am directing the Department of War to deem Anthropic a supply chain risk for national
security, effective immediately. No contractor, supplier, or partner that does business with the
United States military may conduct any commercial activity with Anthropic. Anthropic will continue
to provide the Department of War services for a period of no more than six months to allow for a
seamless transition to a better and more patriotic service. Dude, I can do an hour on that paragraph
alone. No kidding. I'm going to come back around to the complexities of that paragraph,
but each sentence on its own contradicts itself in, like, five different ways. But we'll come
back to that. So then Anthropic responds with another official statement. So this is now that
evening, and I'm at hockey with my buddies, and I'm like, right. The statement says: we held to
our exceptions for two reasons. First, we do not believe that today's frontier
AI models are reliable enough to be used in fully autonomous weapons; allowing current models to
be used in this way would endanger American warfighters and civilians. Second, we believe that mass
domestic surveillance of Americans constitutes a violation of fundamental rights. Designating
Anthropic as a supply chain risk would be an unprecedented action, one historically reserved for
US adversaries, never before publicly applied to an American company. We are deeply saddened by
these developments. As the first frontier AI company to deploy models in the US government's classified
networks (and, as of this moment, still the only one capable of doing this, by the way),
Anthropic has supported American warfighters since June 2024 and has every intention of continuing
to do so. We believe this designation would be both legally unsound and set a dangerous precedent
for any American company that negotiates with the government. Dario then did an interview
Sunday morning with CBS, where he kept using terms like retaliatory and punitive, which obviously
is legal advice he's being given: you have to set this up as, that's what these are.
In the statement, they continued: no amount of intimidation or punishment from the Department of War
will change our position on mass domestic surveillance or fully autonomous weapons. We will
challenge any supply chain risk designation in court. Okay, so I'm going to stop there for a
second. I'm going to go into an Atlantic article, which I found to be really solid, about the real
sticking point here. But is there anything I'm missing there, Mike, that you want to, like, double click on?
No, I think that sums up really well kind of where we're at as of right now in the timeline and
what has happened so far, before we kind of get into some of the other lab responses and government
responses. All right, so the Atlantic has this really good article. I think this was from Sunday
morning, if I recall correctly. So they have inside sources, obviously, when you're reading this.
So they said: according to a source familiar with the negotiations, on Friday morning Anthropic
received word that Hegseth's team would make a major concession. The Pentagon had kept trying
to leave itself little escape hatches in the agreements it proposed to Anthropic. It
would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous
killing machines, but then qualify those pledges with loophole-y phrases (that's their word),
like "as appropriate," suggesting that the terms were subject to change based on the
administration's interpretation of a given situation. On Friday afternoon, Anthropic learned
the Pentagon still wanted to use the company's AI. This is real key; this is the sticking
point. They've been doing this since 2022, by the way. So this is not like, hey, we might do this;
they're doing this. Okay, so Anthropic learned that the Pentagon still wanted to
use the company's AI to analyze bulk data collected from Americans. That could include information
such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked
movements, and your credit card transactions, all of which could be cross-referenced with other
details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far, and the
deal fell apart. So the reason they want that kind of data is, let's say there's, like, a protest and
they want to know which Americans were there. They would use Claude to analyze all this data and
basically match you and figure out if you were there or not. That's the kind of thing they're talking about.
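Just to make the mechanics of that cross-referencing concrete, here is a purely hypothetical sketch of the general technique being described: joining one location dataset against an event's time and place. Every name and data structure here is invented for illustration; none of it comes from the reporting.

```python
# Hypothetical sketch: flag device IDs whose GPS pings fall inside an event's
# time window and rough geographic radius. A second dataset (credit card swipes,
# search history, etc.) could be joined the same way, and the intersection of
# the matches is the kind of "were you there?" answer described above.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GpsPing:
    device_id: str   # stands in for an identifier already linked to a person
    lat: float
    lon: float
    ts: datetime

def devices_near_event(pings, lat, lon, start, end, radius_deg=0.01):
    """Return device IDs that pinged within roughly 1 km of (lat, lon) during the window."""
    return {
        p.device_id
        for p in pings
        if start <= p.ts <= end
        and abs(p.lat - lat) <= radius_deg
        and abs(p.lon - lon) <= radius_deg
    }
```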
The article went on to say: my source, whom I'm granting anonymity because they're not authorized
to talk about the negotiations, also shed further light on the disagreement between Anthropic and the
Pentagon over autonomous weapons, machines that can select and engage targets without a human making
the final call. This is real key: the US military has been developing these systems for years and has
budgeted $13.4 billion for them in fiscal year 2026 alone. They run the gamut from individual drones
to whole swarms that can be used in the air and the sea. Now this is really important, and this
gets back to them dropping the thing from the responsible scaling policy. This is the Atlantic:
Anthropic has not argued that such weapons should not exist. To the contrary, the company has
offered to work directly with the Pentagon to improve their reliability. But for now,
Anthropic's leaders believe that their AI hasn't reached that threshold. They worry that the models
could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger
civilians or even American troops themselves. At one point during the negotiation, it was suggested
that the impasse over autonomous weapons could be resolved if the Pentagon would simply promise
to keep the company's AI in the cloud and out of the weapons themselves. This gets into
some more technical detail; Mike, you touched on this a little bit. I'm not going to go down that
path right now, but just understand that's a really important distinction that maybe we'll touch
on at a later date. And then the article continued: Anthropic's leaders might have hoped that other
AI companies would hold a similar line related to the surveillance and autonomous weapons.
Earlier in the week, they had reason to believe that OpenAI might.
CEO Sam Altman had said that, like Anthropic, OpenAI would also refuse to allow its models to be
used in autonomous weapons systems. Now, as I said at the start, we try our best to present
all sides of debates. So I'm now going to present to you something, and I'm going to connect
a couple of dots here that I think are extremely important to understand. So this counterpoint
is going to come from Palmer Luckey, who is the founder of Oculus, who sold that to Facebook and
made his hundreds of millions, or billions, selling that to Facebook. He is also the co-founder of
Anduril Industries. Anduril Industries is an American defense technology company specializing in
the development of, wait for it, advanced autonomous systems. The company, Anduril,
raised $2.5 billion at a $30.5 billion valuation led by, any guesses, Mike? Founders Fund and Peter
Thiel. Yep. So this is the name; this is the person that you want to keep track of:
Founders Fund and Peter Thiel. Peter Thiel is part of the PayPal Mafia. That is where Elon Musk made
his first couple hundred million. It's where David Sacks, like, all these guys, they're all
connected to PayPal. And then Thiel has Founders Fund. So Thiel is, and I'll get to it in a second,
Thiel is a very important character in all this. So here is Palmer's tweet. He said:
Do you believe in democracy? Should our military be regulated by our elected leaders or corporate
executives? Seemingly innocuous terms from the latter, like "you cannot target innocent civilians,"
are actually moral minefields that lever differences of cultural tradition into massive
control. At the end of the day, you have to believe that the American
experiment is still ongoing, that people have the right to elect and unelect the authorities
making these decisions, that our imperfect constitutional republic is still good enough to
run a country without outsourcing the real levers of power to billionaires and corporations and
their shadow advisors. I still believe, Palmer says. And that is why, quote, "bro, just agree the AI
won't be involved in autonomous weapons and mass surveillance, why can't you agree, it's so simple,
please, bro" is an untenable position that the United States cannot possibly accept. So
Palmer's position is, and Palmer's very outspoken; I actually really like listening to
Palmer. I think sometimes his perspectives are very different than mine, but as I said in my post
on LinkedIn, that's okay. Like, we should listen. Sometimes there's things in there where it's like,
actually, in theory, this is a really interesting perspective. Like,
and I'm going to kind of divert here for a second, and this is where I'd say be careful:
you can say that Anthropic is making a moral,
ethical stand here. But the precedent it sets is that one company, in essence one CEO, gets to tell
the elected officials of our democracy what they can and can't do, what is and is not legal.
And that, I think, actually gets to the heart of what is actually going on here: there's the
slippery slope. It's like, okay, if we make these two concessions, well, what else does this
corporate bro from Silicon Valley get to tell us, the elected officials of the government,
about what we can and cannot do? So this is the counterpoint. I've looked at lots of things,
read lots of things in the last, like, 72 hours, and I thought he did the best job of just laying
this out. Like, listen, you can take the moral high ground, but at the end of the day, we are electing
people to make these kinds of decisions. Now, you might not like what this current administration
does with that power, but our democracy depends on these people. If we don't like what they do with
this, then we vote them out of office, is basically what he's saying. So I'll stop there for a
second, Mike, and see if you have any thoughts on this, without us getting down a slippery slope here.
Yeah, I mean, I feel like every angle is a slippery slope here. But yeah, it is
interesting. I like all the context you provided, because I think it's
very easy to get into a really quick, like, gut-reaction, black-and-white thing about this,
right? Because obviously something like autonomous weapons sounds super scary. But like you mentioned
from that Atlantic article, this stuff is not exactly new; it's already been done.
That doesn't make it good or bad; I'm not trying to value-judge that. But it's not necessarily like,
oh, we are coming up with this awful, sinister, AI-powered thing. Now, maybe
mass surveillance sounds a little different. But I would say it's a helpful perspective, I think,
to have the nuance on this before we kind of get into where it goes and what it means. I think
it is very interesting to hear Palmer Luckey, who is backed by Peter Thiel, raging against
billionaires having an impact on government; it's a unique position, I would say. So it's going
to be really interesting to see people speaking out of many sides of their mouths, and how this makes
for some strange bedfellows as well. So then, one of the things we often try and do on the show is,
like, okay, what are the other perspectives? So Ilya Sutskever, who we've talked about many times,
who does not tweet often. Yeah. He showed up, and he said: it's extremely good that Anthropic has
not backed down, and it's significant that OpenAI has taken a similar stance. This was before
Sam jumped in and took the deal. He said: in the future, there will be much more challenging
situations of this nature, and it will be crucial for the relevant leaders to rise up to the
occasion, for fierce competitors to put their differences aside; good to see that happen today. So
again, that was before OpenAI stepped in and took the contract. Right. Now Google, I don't know
what's going on there. So we have yet to see, and again, if someone sees something while we're on,
like, flag us in the chat: I have not seen a tweet from Sundar or Demis Hassabis about
any of this. They've been radio silent since last Wednesday. The only person, the only major leader
at Google that I saw tweet anything was Jeff Dean, and this was on the 25th. He's the chief scientist
of Google DeepMind and Google Research. So when it came out that the government was basically
presenting this ultimatum on Wednesday, Jeff tweeted: mass surveillance violates the Fourth
Amendment and has a chilling effect on freedom of expression; surveillance systems are prone to
misuse for political or discriminatory purposes. Somebody then said, well, what about autonomous
weapons? He replied: in 2018 I signed this letter, and my position hasn't changed. We'll put the
link in the show notes, but the Future of Life Institute published a pledge on lethal autonomous
weapons in June 2018. In that pledge, it says: AI is poised to play an increasing role in
military systems. There is an urgent opportunity and necessity for citizens, policymakers,
and leaders to distinguish between acceptable and unacceptable uses of AI. In this light,
we, the undersigned, agree that the decision to take a human life should never be
delegated to a machine. In June 2018, some 5,200 AI leaders signed that. Okay.
Fun side note. So keep in mind, all right, the government wants to get rid of Claude. You know,
as of right now, you can't work with them; they're on the supply chain risk list.
Okay, so who steps in? So this was all, like, you know, there's five labs that they could possibly
use, unless they're going to use DeepSeek from China, which isn't going to happen; but ironically,
they're treated more favorably now than our own American-based Anthropic. So you have
OpenAI, you have Anthropic, Google, xAI. Who am I missing? I guess I do have to even include
Meta in here. Meta. There we go. Okay. Those are the five companies that are building AI models
powerful enough, in theory, to do what the government wants. Yep. Well, who's obviously going to step in?
You know, Elon Musk, that was a given. Ironically, I don't know who seeded this article,
but nothing is a coincidence at this point: the Wall Street Journal comes out with an article, I
think it was on Thursday. It says: officials at multiple federal agencies have raised concerns
about the safety and reliability of Elon Musk's xAI artificial intelligence tools in recent months.
Grok 4 "does not meet the safety and alignment expectations required for general federal use"
within GSA (the General Services Administration) and an experimental federal AI platform,
said a January 15 executive summary. The warnings preceded the Pentagon's decision this week
to put xAI at the center of some of the nation's most sensitive and secretive operations
by agreeing to allow its chatbot Grok to be used in classified settings, which is what
Claude is currently used for. So Elon's trying to slide in and, like, get Grok in there.
It said Anthropic was the only developer approved for classified use before the deal between xAI
and the military. But ironically, despite all this, it says: in recent weeks, GSA officials were
told to put xAI's logo on a tool called USAi, which is essentially a sandbox for federal
employees to experiment with different AI models. So while the logo is there, the tool actually
isn't available. So, like, one possibility, and the most obvious one, was Elon Musk and xAI;
but apparently someone at the government felt like leaking this to the Wall Street
Journal, saying that tool is in no way safe for our systems, even though we're being told to put it
into them. Okay. Then one other piece of context before we get into OpenAI sliding in and taking
the contract. Someone I follow, and I think I mentioned Dean Ball last week on the
episode, maybe a podcast episode with him, so it's ironic timing: Dean Ball is a former senior
policy advisor on AI for the Trump administration and lead drafter of the administration's AI
Action Plan, which I remember at the time saying, like, wow, someone who knows their stuff actually
wrote this; this action plan is pretty good. So he provided context on the significance of this
moment in a series of X posts on Friday night. So right after all this happened, at 5:25 Friday night:
the United States federal government is now, by an extremely wide margin, the most aggressive
regulator of AI in the world. Congratulations, everyone. Then at 6:08: think about the power
Hegseth is asserting here. He is claiming that the Department of Defense, or War, can force all
contractors to stop doing business of any kind with arbitrary companies. In other words, every
operating system vendor, every hardware manufacturer, every hyperscaler, every type of firm
the Department of Defense contracts with: all their services and products can be denied to any
economic actor at the will of the Secretary of War. This is obviously a psychotic power grab.
It is almost surely illegal, but the message it sends is that the United States government is a
completely unreliable partner for any kind of business. The damage done to our business environment is
profound. No amount of deregulatory vibes sent by this administration matters compared to this
arson. This is a former Trump advisor; so, again, giving you all perspectives. The United States government just
essentially announced its intention to impose Iran-level (this is before we bombed Iran, by the way),
Iran-level sanctions or China-level entity listing on an American company. This is, by a
profoundly wide margin, the most damaging policy I have ever seen the US government try to take,
and it probably will not succeed. Okay, real quick, stop there before we get into Sam, and then
we'll move on. There's a big topic; there's lots going on here. Yeah, I mean, this is probably a
whole topic in and of itself, but I feel like, as we're doing the play by play and we're like,
oh, hey, how is this affecting the AI industry, I think Dean hits the nail on the head:
this is about far more than AI. This is about the US government essentially
imposing a command economy on AI tools and technology. And it's pretty wild to see that kind of move
be used so, carelessly, I don't know if that's the right word, but just so suddenly; it seems like it
came out of nowhere. Especially, my gosh, we've only talked about the administration in terms of
the deregulation piece, and then suddenly it's an about-face, like they regulate the most innovative
company. 100%, yeah. Okay, so then Sam, at 9:56 PM Friday night, slides in with a tweet and says:
tonight we reached an agreement with the Department of War to deploy our models in their classified
networks. I'm not going to read this whole thing. Basically, they step in, they do the thing.
They claim the language is different and gives them, like, comfort to do this, and he actually claims
that by them doing this, it might actually help Anthropic, and that they have actually advocated
with the Department of War to not impose these terms on Anthropic. And so he basically says, we
think by doing this, it's for the best of humanity and the best of the industry. That doesn't go
well for them. Obviously, there's almost immediate backlash for doing this. Their own employees are
like, what are we doing? There's the letter you mentioned, Mike; people are signing that, like,
don't divide us, we're in this with Anthropic. So Sam feels the need to then do an ask-me-anything
session Saturday night on X, and we'll put a link to this. There are things he touched on, like why
the rushed deal, like, why did you have to sign this so fast? Concerns about the precedent set with
Anthropic, with the Department of War blacklisting them; he gets into that. He talks about red lines
and lawful use. He talks about the designation of supply chain risk, and how he will continue to
advocate that they should not do this to Anthropic, even though they're competitors. And then he summarizes
it with, like, three key takeaways. He said, one: there's more open debate than I thought there would be,
at least in this part of Twitter, about whether we should prefer a democratically
elected government or unelected private companies to have more power. That goes exactly to the point
Palmer Luckey was making. He said: I guess this is something people disagree on, but I don't; this
seems like an important area to discuss. So he's saying he is in favor of the government, the
elected people, being the ones that should decide how to use the technology. Two: I think the
question behind a lot of the questions, that I haven't seen quite articulated, is what happens if
the government tries to nationalize OpenAI or other AI efforts? He said: I obviously don't know.
I have thought about it, of course. It has seemed to me for a long time that it might be better if building
AGI were a government project. So he's actually, in a passive way, kind of saying, like, that's really
not a bad idea, that the government should probably be doing this. But, he says, it doesn't seem super
likely on the current trajectory. That said, I do think a close partnership between governments
and the companies building this technology is super important. And the third, he said: people take
their safety, in the national security sense, more for granted than I realized, which I think is a good
thing on balance, but I don't think it shows enough respect to the tremendous work it takes for that
to happen. This is a side note. Again, you may not think this is as significant as I do,
but in 2025, and we've talked about this on the show, OpenAI president Greg Brockman and his wife
Anna became mega-donors to the Trump administration, in fact the largest single donors,
giving $50 million to Leading the Future, a bipartisan super PAC focused on combating state-level
AI regulation, and $25 million to MAGA Inc., a pro-Trump super PAC. Take that for what it's worth.
So I will say, it's very surprising to me that OpenAI and the government could arrive at terms
on a contract so quickly. This is the government; nothing in the government
actually gets done in 12 hours. So I don't even understand how this is even possible.
But I want to wrap up here, Mike, so we have time for all this other stuff.
What I tried to answer is, what happens next? So here's a couple of thoughts.
The open letter from OpenAI, Google, and others supporting Anthropic:
we talked about this, the notdivided.org letter; we'll put the link in the show notes.
As of this morning, 645 Google employees and only 94 from OpenAI, which I thought was interesting,
have signed this letter, basically saying they support that Anthropic did not make these
concessions. I do think that OpenAI is in a race to save their talent. I think they're going to
lose staff quickly over this. I think this is the greatest talent recruiting play
that Anthropic could have ever made. Like, if you care about safety, they already were the place
to be. They're probably getting flooded with resumes from top researchers: I'm out, I'm coming over
to Anthropic, let's go. Another note: if the Democrats take back the House and the Senate in
the midterms this year, this is going to get insanely messy. So, something to keep an eye on.
The other thing that I think is hilarious, and this goes back to the statement made
by Hegseth and how it contradicted itself: they're so much of a supply chain risk that
they're mandating everyone else stop working with them. But we're still going to use them; in the
bombing of Iran over the weekend, Claude was fundamental to that. And we're going to
keep them around for six months. So they're so dangerous that we're actually going to allow them
to stay in our systems for the next six months. The six-month wind-down, I think, is laughable.
Like, I don't think this ever happens. I think they find a deal. What this administration does is they
take extreme positions as negotiating ploys. You can go read the book that they
literally wrote on how to do this. They take the most extreme position, they do the most extreme
thing, say the most extreme thing, all to get you to meet somewhere in the middle, or closer to
their end game. So I don't think there's any way that this actually happens. I don't think legally
they can designate them this way, and I don't think they will. I think they will find a deal. I think
they were probably still talking through the weekend, and somehow they find a way to make this work,
because the government needs Anthropic. That is very obvious. Anthropic wins hearts and minds.
They jumped to number one in the App Store over the weekend from, like, the 200s. So if you go look right
now, Claude is number one in the App Store, ahead of ChatGPT. If the Hegseth post is to be taken
literally, it would be catastrophic for Anthropic; it would literally bankrupt them, and be a massive
mess for the whole industry. Go back to what I just said: Anthropic raised $30 billion in February,
co-led by Peter Thiel's Founders Fund. So the dude who put JD Vance in office
as the vice president, the guy who backed Palmer Luckey, who is like the puppeteer of all of this,
co-led every round Anthropic has raised. He was the first Silicon Valley guy to support Trump
in 2016. So Anthropic's position is this. What did they call them? The left-wing nut jobs,
or whatever it is. Right. Right. Thiel is the guy that's funding them. Yeah. None of this
makes any sense. So when you zoom out, there's just no way this is how this plays out.
Because, the other one: who is the chairman and co-founder of Palantir, which is how
Claude is being used, through Palantir? Peter Thiel is the chairman and co-founder of Palantir.
Right. And Google owns 14% of Anthropic, Amazon owns 15 to 21% of Anthropic, Microsoft owns some
single digits. There's just no way that this is how this plays out. So I'm going to stop there. I'm
like, if it does proceed, and if they do actually, you know, officially designate these things
beyond just a tweet, the legal case on this will run for years. I think it's just one of the
craziest things ever; it's not just an AI story, it's one of the crazier news stories I've ever followed.
And it's only, like, three days old. So, no kidding. I want to make sure we end up having
time for all the other topics. But this is why we were going to just do this as three main topics.
Like, there's so much to unpack. But hopefully that gives some perspective. But Peter Thiel,
Palantir, Founders Fund: follow the money. Like anything in business and government, follow the money.
I don't see any way that Thiel isn't talking to Vance, and Vance isn't talking to Trump and
Hegseth, and all this stuff somehow just gets made to go away. It's convenient that there was
a bombing of another country over the weekend, and you kind of, like, forget about some of these things,
or they steal the headlines by the time you get to Monday. And, I don't know, the whole thing's wild.
Yeah, the only other thing I'll say is, all these comments from the government, despite the bluster,
despite the back and forth, despite the threats, they are just admitting they can't live without
Claude. And frankly, I agree with them on that, because you'd have to pry Claude from my cold,
dead hands if you wanted to take it away. So that alone, that fact, may confirm everything you just
said. We're like, good luck when you try to take this away from people. I didn't even think about the
enterprise angle. And so, like, yeah, it's the greatest marketing thing they've ever done. Someone in
the office this morning came in and was like, oh, they didn't even need the Super Bowl ad. Like,
if you're a massive enterprise in a highly regulated industry, and you now know that the only
company that the government trusted in classified settings for two years was Anthropic,
who are you going to trust? It sure as hell is not xAI, who can't even get Grok
approved to do this stuff. OpenAI apparently wasn't already there; like, how were they not already
there? Right. And I have to admit, I have no idea where Google comes out in all of this. I just assumed
Google already had Gemini in classified settings. I don't know. Like, really, that's the
most bizarre part of this. I'm very anxious to see Google's official positioning on all this.
And the silence is deafening, I would say; from Google's perspective, it's very weird.
Google already had some issues a couple years ago where employees were, like, revolting against
defense contracts and stuff. I forget all the details, but I think we talked about it, so they
may be really gun-shy on that, I guess. Yeah. And Eric Schmidt, the former CEO and chairman, is
very aggressively involved in the building out of military capabilities. Yeah. Yeah. It's a tricky topic.
Well, I suspect that this will not be the last time we talk about it. We'll probably have a bunch more
to discuss on next week's episode as well. Seriously. All right. All right, so let's get into some other
main topics this week that are very closely related, though; we're kind of talking about some other
updates related to OpenAI. So obviously, they're highly involved in the Anthropic story with
their decision to accept a deal with the Pentagon. But they also had a pretty big week in some
other ways: they closed a $110 billion funding round this week, which is the largest private financing
in history. They got $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank.
And this round values OpenAI at an $840 billion post-money valuation, and it remains open for
additional investors. So this Amazon investment is really interesting; it's kind of the centerpiece.
So of this $50 billion they're putting in, $15 billion is up front and $35 billion is contingent
on OpenAI meeting certain milestones. The two companies expanded an existing cloud agreement by
$100 billion over eight years. AWS will become the exclusive third-party cloud distribution provider
for something called OpenAI Frontier. We talked about this on a previous episode; this is the enterprise
agent platform that OpenAI launched in early February. It basically helps enterprises
deploy AI agents as configurable AI co-workers. On top of all this, OpenAI also
committed to consuming two gigawatts of capacity on Amazon's custom training chips. And one final
note about that OpenAI Frontier product: the enterprise push is extending even further, because
they also launched Frontier Alliances, which is a program where they're partnering with McKinsey,
BCG, Accenture, and Capgemini to deploy this platform at scale. So Paul, busy and controversial
week for OpenAI, with lots to unpack: that largest private financing round in history, I think
that deal with Amazon, and especially pushing Frontier into enterprises via these partnerships,
also merits some consideration here. Yeah, I mean, talk about a wild day for Sam Altman.
He tweets in the morning, we raised $110 billion, which, if I'm not mistaken, the previous largest
private round was, like, their own $30 billion or $40 billion round or something. Yeah, yeah.
So, like, two and a half times the largest ever. And then by 10 o'clock that night,
you're tweeting, we just stepped in and took this contract from the government. And just for
context, the largest IPO ever was, like, $25 billion. So, I mean, this is unprecedented amounts
of money. In the Scaling AI for Everyone blog post where they announced this funding,
a couple of interesting notes. They said there are nine million paying business users for
ChatGPT for Work right now; I think that's the first time we've seen that number in a while.
They've moved from 800 million weekly active users to 900 million on ChatGPT, with
50 million consumer subscribers. So that's where they're at now; I think that's the paid number
within there. And then they say the valuation for this round increases the value of the OpenAI
Foundation's stake (that's the nonprofit) to over $180 billion, further strengthening
what is already one of the most well-resourced nonprofits in history. And then
the Frontier stuff, I thought, was interesting, because we've talked a lot about how
one of the big challenges right now isn't the technology itself and what it's capable of
within the enterprises; it's enterprise adoption and change management. And OpenAI
just straight up said, like, this is the problem. So there were some quotes in
TechCrunch from the AI Impact Summit. It said: one of the interesting things, and some of the
inspiration for the work we've been doing lately with OpenAI Frontier, is we have not really seen
enterprise AI penetrate enterprise business process. So, in essence, they're trying to think about
how do we get this into the enterprises, which leads to this alliance partners thing you talked
about, Mike. So they said: the limiting factor for seeing value from AI in enterprises isn't model
intelligence; it's how agents are built and run in their organizations. Real impact with AI
also requires leadership alignment, workflow redesign, integration across systems and data, as well
as the kind of change management that drives adoption. So that's where they announced these four
components. And they basically say they're going to work with OpenAI's forward-deployed engineers,
or FDEs, which we talked about on a recent episode, Mike, about how they're staffing up to place
these people. In essence, imagine taking an engineer who can actually build stuff, custom agents;
they go work in-office with these major brands, and they basically find ways to automate work.
And so what they're now saying is they're going to do this in partnership with BCG, McKinsey,
Accenture, and Capgemini. And they highlighted the capabilities of each of these in this post.
They said McKinsey and BCG each bring deep experience to help leadership decide where to start,
redesign their operating model, embed AI, and drive adoption, while Accenture and Capgemini both
advise on strategy and then help wire Frontier into the systems and data enterprises actually run
on, securely and reliably. So that's the premise of this alliance: through
these companies that already have the relationships with all the enterprises and are
trusted by them, they bring them in, match them with their forward-deployed engineers, and
rapidly push adoption in the enterprise. So OpenAI needs adoption in the enterprise;
it's hard; they've accepted that; and now they're trying to kind of put these things together to
enable it. Yeah, two quick final notes there. Fun fact: the forward-deployed engineer role
actually originated with Palantir; that's right, they embedded people with the military to deploy
their technology and have been doing it for a decade or so. But it also just kind of struck me, you know,
and we'll talk about this with a number of other topics coming up: we talk about on the podcast
this idea that the total addressable market that these companies are going after is not software,
it's not SaaS; it's employment, it's salaries, right? This is the start of that. This is how agents
get into the enterprises as co-workers. This is not to be alarmist, but that is how this happens.
You can get alarmist if you want; we'll save it for that.
But I don't even mention that in a super negative way out of the gate; it's more like,
you are seeing the actual steps be put into place for these things to become your co-workers at
the enterprise level. Yep, it's happening faster than most people realize. Yes, it is. All right, so let's
kind of get into some of that, because our third big topic this week is about an interview with
the creator of Claude Code. So Boris Cherny, who leads Claude Code at Anthropic, sat down for an
interview on Lenny's Podcast this week, and he made this really interesting
claim, basically saying coding is effectively solved. He said he has not edited a single line of code
by hand since November 2025; every line has been written entirely by AI. Interestingly, Cherny
built Claude Code as a side project at Anthropic in September 2024, and within five days of
releasing it internally, half of Anthropic's engineering team was using it. The product is now
generating a billion dollars in annual run-rate revenue, and four percent of all public
GitHub commits are authored by Claude Code. Cherny predicts that will hit 20% by the end of
2026. As a result of using Claude Code, Cherny said on the podcast, engineering output at
Anthropic has increased 200% per engineer, and there's also a real shift at the organizational
level, where on his team everyone (product managers, designers, finance people) codes. And he
predicts that by the end of the year, everyone is going to be a product manager, everyone will code,
and the title of software engineer will actually start to go away; he actually said it's going to be
painful for a lot of people. So Paul, I'll let you kind of run with your takeaways on this episode. I
know you had a ton of them. I'm just curious, though: obviously neither of us are software engineers,
but, you know, I'm wondering, do you buy that claim that coding is basically solved? I know there are some
bigger implications there for knowledge work at large. Yeah, that's why I wanted to highlight
this one as a main topic. I tweeted after I listened to this episode that, like, some Lenny's Podcast
episodes you listen to twice, and this is one of them. Yeah. And I have listened to it twice now.
So, I think this is the first time I heard Boris talk; I'm pretty sure it's the first
interview of his that I've listened to. But I've been following him now for probably six months,
closely, on X, and he's pretty active, so you learn a lot from him. But to hear his story,
kind of how he created Claude Code almost as, like, a hack, he was just, like, messing around,
I thought was fascinating. But I think the real key, and the reason we wanted to highlight this, and I
would recommend people listen to it, is: replace "code" with whatever you do when you're listening
to this podcast. So let's say you do marketing or consulting, or whatever it is; just every time
he says "code," replace it with the things you do, the tasks that you do for your living.
Because that's what he's implying: whatever we just did with coding is now going to come
to the rest of knowledge work, and he says that, basically. So I think, first, it's interesting:
that the thing that drew me to inthropic was the mission it's all about safety and when you talk
to people in inthropic everyone you find in the hallway that's all they want to talk about is
is safety. So one of the key things he talks about is the way that inthropic approaches training
these models and what they're building he said specifically related to building of of cloud code
we wanted to ship some kind of coding product for inthropic for a long time we were building the
models in this way that kind of fit our mental model of the way that we build safe AGI which is
interesting because usually people in inthropic don't use AGI where the model starts to be really
good at coding then it gets really good at tool use then it gets really good at computer use
And that's their trajectory. I'm going to repeat this again, because this is, like, the key: if you take
nothing else away from this, this is the thing you have to understand. Coding is: it's good at writing
code. Tool use is: it doesn't just rely on the language model itself to do the outputs, to write your
briefs, to come up with creative ideas for your campaigns, things like that; tool use is access to
other things, like the internet, like search, so the ability to go use other tools to improve the
language model. And computer use is seeing everything on your screen digitally, and then being
able to act in a digital way. That's computer use: agents that can do things on the screen just like
you and I would.
for innovation you have to give people space and even if like 80% of the ideas are bad that's okay
like you never know when the good things going to come he didn't think cloud code was going to work
he didn't really even think it was that thing of a thing when he first did it you touched on how
much of his code is written by the AI 100% he said he hadn't written a line of code since like November
but I thought there was an interesting note where he said like in February it was like 20%
then by May it was like 30% and then by November is 100% so this is not like a multi-year thing this
was like within eight months it went from 20% of his work to 100% of his work and but they're
still hiring more people but these people are like four acts more productive than they used to be
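To make that coding / tool use / computer use distinction concrete, here is a minimal, hypothetical sketch of a tool-use loop in Python. This is not Anthropic's or Claude Code's actual implementation; `fake_model` and `search` are made-up stand-ins for a real LLM API and a real search tool, just to show the shape of the loop.

```python
# Hypothetical sketch of "tool use": instead of relying only on what the
# language model has memorized, the loop lets the model request a tool call
# (search, calculator, etc.) and feeds the result back into the prompt.

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call. A real system would query a model here."""
    if "TOOL_RESULT" in prompt:
        return "FINAL: answered using the tool result above."
    return "CALL_TOOL: search('Claude Code adoption')"

def search(query: str) -> str:
    """Stand-in for a real search tool."""
    return f"top result for {query!r}: ..."

def agent_loop(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        # The model asked for a tool; run it and append the result.
        query = reply.split("search('")[1].rstrip("')")
        prompt += f"\nTOOL_RESULT: {search(query)}"
    return "gave up"

print(agent_loop("How much of Boris's code does AI write?"))
```

The key design point the episode describes is exactly this feedback loop: the model's answer is no longer limited to its own outputs, because each turn can pull in fresh information from outside tools.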
And then when it got into what's next for Claude, he did say Claude is starting to come up with ideas, which I thought was really interesting, because if we think about OpenAI's levels of intelligence, chatbots are level one, reasoning models level two, agents level three, and innovators level four, and innovators come up with ideas. And I'm increasingly seeing people within the labs talking about the fact that these models are functioning as innovation partners; they're actually starting to come up with ideas and solve problems. And then he gets into the idea that this is coming to all knowledge work, and that's where they're going. A couple other quick notes I thought were interesting. He talks about token budgets:
so, like, when a coder starts working at a lab, it's: what's my token budget? Meaning, how much access to intelligence do I have, how much can I spend using the AI myself to do my job? And I think that might become true for knowledge workers. Like, a marketer or a salesperson might say, hey, okay, my salary is $150,000 a year, what's my AI budget, how much AI can I use to help me? So if you're telling me I have to keep my headcount flat, how many AI agents do I get, and what's my budget on those AI agents? That may sound weird, but that is already happening in coding, and I could see, in other industries, by the end of '26, that becoming a regular conversation in negotiations: what is my agent budget? So I would definitely go listen to this podcast. Again, it can be kind of technical from a coding perspective, but pretend he's just talking about your profession and replace coding with whatever you do. I think it starts to give people a really good understanding of where this is all going. And his analogy to the printing press I thought was really well stated; it might be the best way to look at this. So, good interview. Lenny does an amazing job, it's a great podcast, and I would definitely go check out that episode.
All right, Paul, before we dive into rapid fire this week,
just one more quick announcement: this episode is also brought to you by our State of AI for Business report. We are currently running a short survey to inform our 2026 State of AI for Business report. This is an expansion of our popular State of Marketing AI report that we've done for the last five years, so this year we're actually going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization. To do that, we're aiming to survey thousands of business professionals across every industry and every function, and we would love for you to be one of them. If you go to smarterx.ai/survey, the survey literally only takes about five to seven minutes to complete. In return for completing it, we'll send you a copy of the full report when it drops, and you're also entered for a chance to win or extend a 12-month SmarterX AI Mastery membership. That link again is smarterx.ai/survey. We would love it if you could take that for us if you have not already; we'd love to hear from you.
And I'm just going to note, again, if you're listening to this podcast, we have a live audience with us today, as I mentioned up front, our Mastery members, and I just glanced at the chat. I'm also glancing at my X feed right now; I can't see any updates on the Anthropic thing, but I do see that both Claude and ChatGPT were having issues this morning, not working. Could be completely coincidental, maybe, I don't know.
Right, the conspiracies are probably already flying.
Yeah, this is, like, a haven for conspiracy with everything going on right now. All right, so let's dive
into some rapid fire this week. First up, Jack Dorsey announced this week that his fintech company Block is cutting roughly 4,000 employees, which is nearly half its global workforce, and he has explicitly named AI as the reason. Block will shrink from over 10,000 workers to just under 6,000. Dorsey was pretty direct about the decision in a statement on X. He said we're not making this decision because we're in trouble; he basically was saying a significantly smaller team using the tools we're building can do more and do it better. Rather than cut gradually, Dorsey said he chose to move all at once; he was basically highlighting this idea that repeated rounds of cuts are destructive to morale, to focus, to the trust that customers and shareholders put in a company. And the market initially rewarded this decision: Block stock surged more than 24% in after-hours trading. However, some analysts have pushed back, noting that Block employed only just under 4,000 people before the pandemic and ballooned during a hiring spree to the very recent 10,000-plus workers, and they raised questions about whether AI is the real driver here or a convenient frame for reversing over-hiring. So Paul, let me get your take on this. There are quite a few polarizing
opinions on this one. Some people think he's doing what they call AI washing, basically using AI as an excuse to let people go to correct for that kind of over-hiring; he's denied that publicly, I believe, in a post on X. Others, however, are saying this is the canary in the coal mine that signals a coming wave of AI-driven layoffs. Where do you fall on this?
So he did address this idea that it's AI washing, and he tweeted: yes, we over-hired during COVID because I incorrectly built two separate company structures, Square and Cash App, rather than one, which we corrected mid-2024, but this misses all the complexity we took on through lending, banking, and BNPL. I don't know what that means, I'll have to look that one up. And that we're now targeting two-million-plus gross profit per person.
Do you know what it is? Buy now, pay later. It's like Affirm and all those companies that let you pay in installments.
Okay, exactly, yeah. Okay, so they're now targeting two-million-plus gross profit per person, 4x our pre-COVID efficiency, which stayed flat at around $500K from '19 to '24; we have and do run an efficient company, better than most. So he's basically saying that is not
the case, we already corrected for the over-hiring. Okay, I'll just note Harry Stebbings, who's one of my favorite podcast hosts; he's got 20VC, a great podcast. He tweeted, this was yesterday morning: I have spoken to three founders in the last 48 hours, all of them with 500 to 1,000 employees; each of them is planning a minimum 20% headcount reduction. He said, with great concern: this is about to get very real for labor markets. As I've said on the show many times, I have talked to companies who have told me point blank they are gearing up for 10 to 20% layoffs; at any given moment they have contingencies in place for who those people are going to be. This is real. Like, I do think that one person doing it sort of starts to give cover to the others. Or, just like last year in spring, we had Tobi Lütke start talking about the reflexive need for people to infuse AI into everything they were doing, and then it led to other CEOs saying it. I think you're going to start to see this kind of thing. It'll start in tech; that's where it always starts with this kind of stuff. But I have talked to a lot of non-tech companies that are also planning for a 10 to 20% reduction of the workforce
in 2026.
Yeah, and I think it's interesting to see the discussion and debate around these kinds of announcements, because, again, I know this gets headlines and people start getting really polarized around it, but really there's a fundamental point to me at the end of the day. When you hear someone like Boris at Claude Code saying developers, or whoever, are now 200% more productive, and we see that in other forms of knowledge work, the fundamental question is: is that possible or is it not? And I think the answer for us is a resounding yes, that's possible. And if that's possible, that doesn't happen in a vacuum; that has ripple effects, and that's why you have these leaders considering these decisions, because if you can be that much more productive, the math kind of does itself from certain financial-metric perspectives. I will see some people trying to contradict this, like, well, then why is Anthropic hiring? Well, because they're growing at 10x per year in their revenue. If you were growing at a thousand percent per year, you would be hiring more people too, right? Most companies aren't growing even double-digit percentages, so if you're growing at single digits and you can be wildly efficient and productive with fewer people, you're going to do it, and that's the reality. You can't look at these labs and say, oh, well, they're hiring salespeople. Well, of course: they're hypergrowth companies; they can be 4x more productive and still need more people.
Yeah. All right, so next up: there is a research essay published
this past week on Substack that actually made it to Wall Street trading desks and triggered a very real sell-off. The essay is called "The 2028 Global Intelligence Crisis." It's written by James van Guilin of Satrini Research and Alep Shah, a former Citadel analyst, and it basically models what could happen if AI displaces white-collar workers at the pace current capabilities suggest. The thesis centers on what these authors call a human intelligence displacement spiral, basically a feedback loop with no natural brake. And they, again, are pretty upfront that they're just hypothesizing; they're modeling one scenario. But in this scenario, as AI agents replace software engineers, financial advisors, and middle management, companies start laying off workers to expand margins, then reinvest the savings into more compute and accelerate further displacement.
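To see why a loop like that can feed on itself, here is a toy simulation in Python with completely made-up parameters. This is not the essay's actual model, just an illustration of the layoffs, compute, capability, more-layoffs cycle the authors describe; every coefficient below is an assumption for demonstration only.

```python
# Toy version of the "displacement spiral" feedback loop (NOT the authors'
# model): layoffs free up cash, cash buys compute, compute raises AI
# capability, and higher capability drives the next round of layoffs.

workers = 100.0      # employed white-collar workers (index, start = 100)
capability = 1.0     # AI capability (index)
compute_spend = 1.0  # cumulative reinvested savings (index)

for year in (2026, 2027, 2028):
    displaced = workers * 0.05 * capability       # layoffs scale with capability
    workers -= displaced
    compute_spend += displaced * 0.5              # savings reinvested in compute
    capability *= 1.0 + 0.3 * compute_spend / 10  # compute buys more capability
    print(f"{year}: workers={workers:.1f}, displaced={displaced:.2f}, "
          f"capability={capability:.2f}")
```

Even with these small made-up coefficients, the number displaced each year grows despite a shrinking workforce, which is the "no natural brake" point: nothing in the loop pushes it back toward equilibrium.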
The essay then introduces two core concepts. One is "ghost GDP," where AI-generated output benefits compute owners but never circulates through the consumer economy, and the other is the "death of friction," where AI agents optimize away the inefficiencies that entire business models depend on. So this hypothetical scenario, unfortunately, gets pretty grim. It projects a doomsday economic scenario that results from these factors, including a national unemployment rate reaching over 10%, an S&P 500 crash of 38% from its peak, and a deflationary spiral, all by 2028. The essay racked up tens of millions of views on X and prompted Citadel Securities to publish a formal rebuttal. The authors themselves clarified, hey, this is just a scenario, not a hard-and-fast prediction. And Paul, whether or not someone disagrees or agrees with this essay,
it moved markets, and I'm curious why you think that's the case. It just seems wild to me that, I mean, a random, interesting analysis to be sure, but just, like, a random essay can now tank the market.
It's the most practical, feasible analysis I've seen of what could happen. So if you think about Situational Awareness and all the buzz that that series of papers created, this is better in the sense that it is much more approachable and understandable to, like, the average business leader.
Yeah, or worker.
So I'll read a few excerpts; I would highly recommend going and reading this whole thing. So, again, we are not saying this has an 80% probability of coming to reality; their whole point was to lay out a possible outcome. But I will tell you, reading this thing, there was lots of it where I was saying there's a greater probability that this is close to the truth than what most people think, and that's hard to wrap your mind around. But I'll
just give you a few things. They're writing this from the perspective of, like, June 2028, and they're kind of summarizing what happened. So it starts with the consequences of abundant intelligence. It says: two years, that's all it took to get from contained and sector-specific, which is where we are today, to an economy that no longer resembles the one any of us grew up in; this quarter's macro memo is our attempt to reconstruct the sequence, a post-mortem on the pre-crisis economy. Then they rewind back to October 2026: the S&P had hit 8,000, the NASDAQ broke 30,000. The initial wave of layoffs due to human obsolescence began in early 2026, and they did exactly what layoffs are supposed to do: margins expanded, earnings beat, stocks rallied. Block laid off 4,000 people; the stock was up 17%. Record-setting corporate profits were funneled back into AI compute. The headline numbers are still great: nominal GDP repeatedly printed mid-to-high single-digit annualized growth, productivity was booming, everything's looking great. The owners of the compute saw their wealth explode as labor costs vanished; meanwhile, real wage growth collapsed, despite the administration's repeated boasts of record productivity. White-collar workers lost jobs to machines and were forced into lower-paying roles. It talks about the ghost GDP, where GDP is improving but the people aren't really seeing the impact in a positive way. It goes into how it started. So again, imagine the
topics we just talked about with Boris and Claude Code. It says: in late 2025, agentic coding tools took a step-function jump in capability (go listen to the episode where we talked about December 2025 and what changed). A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks, not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500,000 annual renewal started asking the question: what if we just build this ourselves? The companies most threatened by AI, the software companies, became AI's most aggressive adopters. Software was only the opening act. What investors missed, while they debated whether SaaS multiples had bottomed, was that the reflexive loop had already escaped the software sector; the same logic that justified ServiceNow cutting headcount applied to every company with a white-collar cost structure. By early 2027, large language model usage had become default. People who didn't even know what an AI agent was were using AI agents, in the same way people who never learned what cloud computing was used streaming services. They thought of it the same way they thought of autocomplete or spell check: a thing their phone just did. It started out simple enough: agents removed friction. I will not, I would love to read the rest of this, it's insane. Go read it.
It is insane. Don't read it as fact, don't read it as this is exactly what's going to happen, but if you just think about what I just read to you, all of that is already playing out. They're talking about the reality of what is currently going on, and they're projecting out how it could play out if the exponential curves that these labs are seeing remain true. Back in February of 2025, Boris looked at an exponential curve and said, wow, by the end of the year it's going to be writing a hundred percent of my code, and he was right. That's what's dictating all the decisions. It's dictating why Google has spent $180 billion on capex this year. All of those exponentials are why they're so confident in a future that probably looks closer to what this article says than to what you or others think the future looks like.
Hmm. All right, so next up: according to at least some
measures, public support for AI data centers appears to be collapsing. A recent poll by Embold Research found that only 28 percent of Americans now support data centers near their communities, with 52 percent opposed. That's net support of negative 24 points, down from a barely positive 2 points, so it was skewing slightly positive just months earlier. Data centers now poll worse than natural gas plants, solar farms, wind farms, and nuclear facilities. Interestingly, opposition on this issue appears to bridge the political spectrum: left-leaning people who do not support data centers are often angry about water and energy strain, while right-leaning activists view it as elite Big Tech overreach. It's not good when both sides hate you. And notably, the deepest opposition right now is among rural Republicans, with negative 20 percent net support. And there are some really public-facing issues
that we're seeing where this backlash is becoming real. For instance, in Southaven, Mississippi, Elon Musk's xAI actually installed 27 temporary gas turbines without permits to power its Colossus AI cluster. These run 16 to 24 hours a day, and there's been a bunch of reporting on residents complaining and fighting back; they describe constant roaring, pops, and high-pitched whining. xAI spent seven million dollars building a sound barrier wall that neighbors nicknamed the Temu sound wall, because it does almost nothing, kind of like a cheap product you'd buy on Temu. And there are some lawsuits potentially in the works around this. So Paul, it's just really striking me, like you said, that this issue cuts across traditional political divides. I wonder whether data centers could be the lightning-rod issue going into the midterms. They just tick a bunch of boxes, and I'm not saying I agree or disagree, but narratively they're easy to understand, for the most part really visible, a visceral kind of symbol of a lot of things people don't seem to like about AI.
Yeah, we've said many times one of the things
that could slow this down is a societal revolt, and you need a tangible thing to revolt against, and this is an easy one. It's easy to protest; it's easy to, like you said, you know, the videos online of the loud noises, all that stuff is very tangible and understandable to people. It'll be very fascinating to watch how they spin this politically, because, to your point, if Republicans in rural areas where the data centers are going hate these things, but the current administration is accelerationist, in terms of, like, accelerate at all costs, forget EPA guidelines, all that stuff, just build, build, build; well, if they find that the polls are moving based on people's hatred for these things, all right, what do you do then? So I don't know the answers, but this is a really interesting thing to watch, to see if it catches steam and actually starts to affect messaging going into the midterms. And then, just separately, it's a problem environmentally. I talk to a lot of people who worry about the environmental impact of AI, and this is a very tangible thing for them to latch onto and focus on, the impact it's having. So it's a big issue on a lot of fronts.
You know, one just final aside here I found interesting. This
is definitely way further down the line, but we've talked about that plan to, like, build data centers in space, and we'll see if that ever actually happens. But someone made a really interesting point in a post online. They were like, hey, you know what, another advantage of data centers in space is people can't go burn them down. So I think they were kind of getting at the point that an unintended, perhaps, benefit of those is that they sidestep this issue. But again, this is possibly a fantasy, we'll see. I don't want to spend a whole topic now on, like, data centers in space, but have you seen a report yet on, like, what happens when there's, like, two million of these things and they start, like, failing and they just burn up in the atmosphere?
I haven't yet really gone down the data-centers-in-space rabbit hole, but I imagine a data center in space is pretty large, so, yeah, I don't know what happens when it falls back to Earth. That might be a good question for us to run through Claude or something one of these days.
We'll go deep on data centers in space.
Exactly right. All right, so next up: Anthropic
has published what it calls the AI Fluency Index. This is a framework for measuring how well people are using AI. It's kind of a study: in the data they collected, they analyzed almost 10,000 Claude conversations and tracked 24 specific behaviors that basically define effective collaboration with AI. Their central finding is that most people are still not using AI as effectively as they could, and the better AI gets, the worse this problem becomes. There's a headline finding here that the verification gap is still a big problem; we've talked about this. When Claude produces polished artifacts, like working apps, code, or formatted documents, they found that users become more directive but less evaluative. So the better the output looks, the less people question it. Obviously, Anthropic says this is exactly backwards, and so they recommend a few things people can do here. First, stay in the conversation: treat the first response as a starting point, not the answer. They found conversations with iteration showed roughly double the effective behaviors of those where users accepted the first output. Second, question polished outputs most of all: the moment something looks good, that's exactly when you should pause and start verifying things. And then third, they say, set the terms of collaboration up front: only about 30% of users explicitly tell Claude how they want it to interact, but those who do show dramatically better results. So
Paul, I found this to be a valuable read, and I'm glad Anthropic is putting it together, but I guess I just have to ask: we're three-plus years into this, and we're literally only now coming out with this, again, really good, but basic stuff, and clearly even advanced Claude users, or maybe further-ahead-of-the-curve people, are not using these tools appropriately.
It goes back to the frontier labs' thing about the lack of adoption and understanding of what these things are really capable of, in enterprises in particular. As I was glancing at this, getting ready, I was thinking back to episode 193, where I talked about basic, intermediate, and advanced users. So, like, the basic user treats it as an answer engine; the intermediate user more as, like, an assistant and advisor through continuous prompting and dialogue; and then the advanced user as a co-worker, an on-demand subject-matter expert. That's how you think about it. And I really liked the one chart they showed about behavioral-indicator prevalence. It prioritized these different traits: iterates and refines is really important, clarifies goal before asking for help, provides examples of what good looks like, specifies format and structure needed, sets interaction norms. It also kind of sets out what the people who are doing it right are doing, and I think that was a helpful chart to see. And again, it's a good report to go look at to understand kind of where we are, but it always comes back to: we are nowhere near as far along as most people
think. Like, yes, there are some power users doing incredible things. Like, one of my buddies: I was walking into basketball Thursday night, and one of the guys is building, like, an AI-native startup. He's like, do you mess around with OpenClaw? You've got to. And I was like, dude, I don't mess with that stuff. Like, I don't know what I'm doing with OpenClaw; I'm not turning that thing loose on my computer. So you have these people who are just, like, racing out on the frontiers and doing all this cool stuff, but for most people it's still just an answer engine: I just give it a prompt, and I'm happy or not happy with its output.
All right, next up: Anthropic has accused three
Chinese AI labs, DeepSeek, Moonshot AI, and MiniMax, of running industrial-scale distillation attacks on Claude, using more than 24,000 fraudulent accounts to extract over 16 million exchanges worth of training data. Distillation is the attempt to train a weaker model on a stronger model's outputs, and it can be a legitimate technique when it's applied internally, but Anthropic is alleging these campaigns basically amount to systematic intellectual property theft.
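For context on what distillation means technically, here is a minimal sketch with toy numbers, using NumPy: a weaker "student" model is nudged to match a stronger "teacher" model's output distribution rather than hard labels. This is a generic illustration of the technique, not any lab's actual pipeline; in the alleged campaigns, the teacher outputs would have been harvested through chat APIs at scale.

```python
# Minimal distillation sketch: the student is trained to match the
# teacher's softened output distribution. To keep the example tiny, we
# optimize the student's logits directly; a real student would update
# network weights via backpropagation instead.
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.5])  # from the stronger model
student_logits = np.array([0.1, 0.1, 0.1])  # student starts uninformed
T = 2.0                                     # distillation temperature

for step in range(200):
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Gradient of cross-entropy between teacher and student distributions
    # w.r.t. the student logits (temperature factor absorbed in step size):
    student_logits -= 0.5 * (p_student - p_teacher)

print("teacher:", softmax(teacher_logits, T).round(3))
print("student:", softmax(student_logits, T).round(3))  # now nearly matches
```

The scale is what turns this from a training trick into the alleged attack: instead of one toy distribution, millions of prompt-response exchanges give the student a broad sample of the teacher's behavior.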
So the scale varied by lab. MiniMax ran the largest campaign, with 13 million exchanges; Moonshot AI extracted 3.4 million exchanges and actually focused more on tool use and computer vision; DeepSeek's campaign was smaller, at 150,000 exchanges, but targeted reasoning capabilities. Anthropic framed this as a national security issue, arguing that illicitly distilled models lack safety guardrails and can feed capabilities into military and surveillance systems. Interestingly enough, Elon Musk had to weigh in on this as well, responding on X by calling Anthropic hypocritical. He's basically referencing the fact that the company just settled for $1.5 billion over their use of copyrighted books to train Claude, and he said Anthropic is guilty of stealing training data themselves at massive scale. So I know this is a bit of a technical topic, but it does have pretty big implications for the ecosystem overall, doesn't it?
Yeah, so it was interesting, because Anthropic, Google, and OpenAI all, in the last, like,
21 days, said the same thing is happening to all of them. So it's obviously a concentrated attack by the Chinese firms to do this, to, like, steal the weights and how it all works, things like that. There was an NBC News report in early February that said Google says its flagship AI chatbot Gemini has been inundated by commercially motivated actors trying to clone it by repeatedly prompting it, sometimes with thousands of different queries, including one campaign that prompted Gemini more than 100,000 times. Interestingly, and tying back to the Claude thing, I was just trying to find it; there's this guy, it's, like, the undersecretary of war, I don't remember his name, I'll find it and put it in there, but he tweeted something. Okay, so he was attacking Anthropic, it was on, like, Sunday, and one of the lines in his tweet was: what's ironic is that there's been no bigger thief of America's public information, identity en masse, or creators' works than Anthropic; search the lawsuits. So, I usually stay out of this stuff, but I couldn't help myself, so I retweeted it and said: is anyone going to tell him they all did this? And so I let it go, and then I see this morning that someone did, because he deleted the tweet, removed the part about them stealing everyone's copyrighted material, and retweeted his original tweet minus that part, because they're all doing it. So, unfortunately, this is a PR battle that the AI labs aren't going to win, because everyone's like, well, you stole our stuff; who cares if they're stealing your stuff? Right. It is a major problem, and it is important that they stop this from happening. They're not apples to apples, but I get the whole: you stole our stuff, so it's cool if they steal yours. But right, we don't want this happening; this is a bad precedent.
All right, next up: Nvidia reported fourth-quarter revenue
of $68.1 billion, up 73% year over year. They beat analyst expectations with these latest earnings; the expectations were $65.8 billion. Data center revenue hit $62.3 billion, now more than 91% of total revenue, a segment that has scaled nearly 13x since ChatGPT launched in late 2022. Their full-year revenue reached $215.9 billion, with net income of $120.1 billion. Guidance for next quarter came in at $78 billion, more than $5 billion above what analysts expected. You would typically think that should be good for Nvidia. However, despite the beat on every metric, Nvidia shares fell 5.5% the following day, erasing roughly $260 billion in market value, the largest single-day decline since April 2025. Can you clarify this for us, Paul? It certainly seems like these
results should have delighted markets.
Yeah, I mean, they're basically projecting out years in advance now on demand. It makes no sense when you look at it on the surface. I don't pretend to be a day trader, smarter than, yeah, irrational markets, in essence. I'm just looking: they're down four percent, and the Dow and NASDAQ are each down about one percent, so Nvidia is down significantly more than the market as a whole, despite, you know, that. I don't know how much the war has to do with this, but it seems like the market as a whole doesn't care that we're doing what we're doing in the Middle East. So I think Nvidia is basically just being punished for, I don't know what, for breaking records and projecting well. I don't try and figure this stuff out; I just bet long-term on companies I think are going to do really well as there's more demand for AI, and I would consider Nvidia to be one of those companies.
Well, it's also just worth looking at those numbers regardless of the Wall Street response, just because, I mean, who knows if we're in a bubble or how long this will last, but those numbers are still going pretty strong.
Yeah. I think the biggest problem is that the revenue is centered on eight companies, probably, predominantly, and people worry about that. And the circular investments are real: like, they just put $30 billion into OpenAI that's going to be spent on $30 billion of Nvidia chips, right? And so some people worry that it's very bubble-like in how the mechanisms of these investments work: we'll put money into you, and you will turn around and spend that money with us, on cloud costs, chip costs, things like that. And I get that; I'm not debating that.
That's interesting, yeah, for sure. All right, Paul, to wrap up here, we've got a number
of AI product and funding updates. I'm going to run through these very quickly as we wrap the episode; if you have anything to add, please chime in. First up, Google released Nano Banana 2, its next-generation image model that combines the quality of Nano Banana Pro with sub-second generation speeds, including on 4K images. It's rolling out now across the Gemini app in 141 countries. Separately, Google acquired Producer AI, formerly the viral AI music tool called Riffusion, bringing the startup into Google Labs and pairing it with DeepMind's Lyria 3 music generation model. This platform lets you generate full songs, create music videos, and build custom instruments from text prompts, with paid plans starting at eight bucks a month. Anthropic has expanded its Cowork platform with a number of enterprise plug-ins, which are prebuilt bundles of skills and tool connections for specific job functions, including HR, design, engineering, finance, and operations. The update also includes private marketplaces for internal deployment and new connectors for Google Workspace, DocuSign, and FactSet, among others. Pika Labs, which is known for its AI video generator, pivoted into a different product entirely, creating persistent AI digital twins called AI Selves: users upload a selfie, record their voice, answer personality questions, and set their AI loose to act autonomously across Slack, WhatsApp, iMessage, and social platforms. The company launched this with a retro-futuristic infomercial, and its employees' own AI Selves tweeted autonomously about the product. Last but not least, there is some more shakeup at Thinking Machines Lab: two more founding members of Mira Murati's Thinking Machines Lab quietly left for Meta, bringing total departures to at least seven since the startup launched less than a year ago. The company has raised $2 billion so far at a $12 billion valuation but has lost co-founders and early researchers to both Meta and OpenAI, driven by what Fortune has reported as money constraints, compute constraints, and a lack of clarity on product.
That is an acqui-hire waiting to happen; if Thinking Machines doesn't get acquired in the next 90 days, I would be shocked. And one final note here, like we always do: we now have a new AI
Pulse survey that will go live for this week, based on this week's topics. The two questions we're going to ask are: first, where do you stand on the Anthropic versus Pentagon dispute over AI safety red lines? And second, Block cut nearly half its workforce this week and named AI as the reason; what's your reaction? I'll be very interested to see the pulse on those two questions and what changes between now and next week.
Yeah, no kidding. So go to smarterx.ai/pulse and you will find this week's survey there when you hear this episode on Tuesday, March 3rd, and we would love to hear from you. So Paul, really, really appreciate you breaking everything down for today.
You know, thanks to everyone for tuning in to episode 200, whether you're joining us live or listening on your podcast streaming platform. If you want to catch the Q&A that we're about to do offline here, AI Mastery members can watch it in your account, and if you're not a member yet, head to academy.smarterx.ai to join and get access. So Paul, thanks again.
Yeah, and thanks, everyone, for the 200 episodes. Like, this time last year we had about 40,000 downloads a month, and now it's about 130,000 downloads a month, so the listenership of the podcast has blown up in an incredible way. And I know, Mike, when we're out in public and get to go do talks and stuff, the amount of people who come up to us and say they listen to us every week, it's awesome to see, and it's not something we ever expected. We literally just started doing this to synthesize information every week for ourselves; it was a pretty selfish reason why we started doing the weekly, you know, four-plus years ago. So, yeah, we're just grateful for everybody that listens, and for all the personal notes we get, and, you know, for the grace when maybe we don't always stick as closely to neutrality as we try to in everything we say and do. You know, we're doing our best. So, yeah, hopefully we'll have another couple hundred episodes and we can celebrate; we're going to try and do something special for 300, so hopefully sometime next year we'll get to number 300. Thanks, everyone, for listening, and for our AI Mastery members, stick with us, we're going to get into the Q&A. So everyone have a great week.
I'm sure we'll have plenty to talk about next week.
Thanks for listening to The Artificial Intelligence Show. Visit smarterx.ai to continue your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.
The Artificial Intelligence Show