
These things are getting really good, really fast,
and it's not just gonna be happening
to computer programmers this year.
It's gonna start happening to everybody else.
Welcome to the Artificial Intelligence Show,
the podcast that helps your business grow smarter
by making AI approachable and actionable.
My name is Paul Roetzer.
I'm the founder and CEO of SmarterX and Marketing AI Institute,
and I'm your host.
Each week I'm joined by my co-host
and SmarterX Chief Content Officer, Mike Kaput,
as we break down all the AI news that matters
and give you insights and perspectives
that you can use to advance your company and your career.
Join us as we accelerate AI literacy for all.
Welcome to episode 201 of the Artificial Intelligence Show.
I'm your host, Paul Roetzer,
along with my co-host, Mike Kaput.
We're recording Monday, March 9th, 9:30 a.m. Eastern Time,
which is relevant because Microsoft
already dropped Cowork today,
which we've been waiting for.
So we did a last-minute add;
we'll get to the Microsoft Cowork announcement
a little later on in the show.
So I don't know, it was interesting, Mike.
I didn't want to jinx us going into this,
but it was kind of a quiet week, I'd say,
from an AI perspective.
But some really interesting main topics
we're gonna hit on that
I think tie into some of the bigger issues
moving forward around jobs and the economy.
There was a cool report from Anthropic,
we'll touch on a really interesting article
from a partner at Sequoia,
and then we'll just get into adoption.
But, well, I don't know, I say quiet week,
but we got GPT-5.4 from OpenAI.
I was like, nothing happened last week,
but we're so used to just a lot of stuff
that what used to be a busy week
now feels like, oh, okay, good,
that's gonna be easier.
It was still something like an hour and a half of prep
before the show, though, no doubt.
All right, so today's episode is brought to us
by the state of AI for business report.
This is a new report we are working on,
but we need your help.
So right now the survey is in the field.
This has given us a chance to hear
how you're feeling about AI adoption,
what's going on in your organization
and within your career. It takes about five to seven minutes
to participate in this survey.
We would love your input.
Go to smarterx.ai/survey.
Take that.
You can put your contact information in, right?
It's not required to put the contact information in, right?
No, okay.
If you want to get the report sent to you,
you can put your email in at the end,
but you can take it anonymously if you'd like.
So again, smarterx.ai/survey,
and that is the 2026 state of AI for business report.
We had about 1,800 people take our state of marketing
AI report last year.
This year we're expanding that outside of marketing.
We're hoping to get a lot more coverage
in other departments, other areas,
so that we can share a really good overview
of what is going on in business.
And we're gonna talk a lot about that kind of concept today.
The other thing we'll mention is also brought to us
by the Intro to AI class that I teach every month.
I've been doing this now since fall of 2021.
We are on 56, I think, right?
Yeah, we're doing a class 56 of the Intro to AI.
So we've had more than 55,000 people,
I think, at this point, register for Intro to AI.
It is a 30-minute class I teach.
It's a Zoom webinar.
And then I usually spend about 25 or 30 minutes on Q&A.
And we often get dozens of questions.
And then we turn the unanswered questions
into special episodes of this podcast.
So you can go to academy.smarterx.ai/courses
and then you scroll down to the free classes,
or in the navigation, you can actually click on courses
and go to free classes; two simple ways.
We will also put a direct link to the landing page
to register in the show notes.
So if you wanna go to the show notes
and click on that, that'll take you right to the landing page.
So again, academy.smarterx.ai/courses.
If you don't personally need the Intro to AI,
share it with people on your team who do.
It is like the first thing we tell people
when they're like, well, how do we get started?
I've got a bunch of coworkers
who don't really understand it yet.
We aren't really feeling the sense of urgency.
We always say just send them to the free class.
Like we do it every month.
So that's a great thing.
So that is happening on March 12th, right Mike?
Yeah.
Thursday, March 12th at noon, Eastern time
will be the free intro to AI class, number 56.
Okay, so the pulse survey, we talk about this every week.
Looks like we had 91 responses.
So we always say this is an informal poll of our listeners.
This is not meant to be formal research.
It's just taking a pulse of how people
that listen to this podcast are reacting
to different topics we talk about each week.
So last week, we said, where do you stand
on the Anthropic Pentagon dispute over AI safety red lines?
62% said Anthropic is right to hold the line
on autonomous weapons and mass surveillance,
even at the cost of government contracts
is how the full answer option read.
It kind of gets cut off
in the Google Form summaries.
So a pretty strong majority.
Then we had about 18% say red lines are reasonable,
but Anthropic should have negotiated more.
Negotiated more quietly, too.
So people are kind of saying the big fuss of it
was more the issue than the actual red lines,
which maybe we just didn't do a good job explaining.
They were negotiating for four months.
This was not something that just happened
over a three-day period;
the government gave the three-day mandate.
And then 16.5% said this is primarily a political power play
and not a genuine safety debate.
That's interesting.
OK, and then the second one was: Block cut nearly half
its workforce this past week and named AI as the reason.
What's your reaction?
44% say the layoffs are real but the pace will be slower
than the headlines suggest.
And then 29% said it's mostly a correction
from pandemic over-hiring.
AI is a convenient narrative.
And 25% said this is the beginning of a major wave
of AI driven layoffs across the industry.
So when you combine the 44% and the 25%,
Mike, we've got "this is the beginning of a major wave"
and "the layoffs are real."
So that would tell me sentiment is, yeah,
there might be some of that pandemic stuff mixed in there
and just some over hiring.
But our informal poll would say people are leaning
in the direction of this is getting real.
And I'm going to suggest maybe after our conversations today,
people may feel even more in that direction.
Yeah, OK, Mike.
So we're actually going to hit on the Anthropic
versus the government situation to start off.
Just give a recap of where that's at.
And then we're going to get into some of the AI adoption
and some of the maybe reality that's
starting to come to life around the AI impact
on jobs and the economy.
Yeah, Paul.
So we are in Anthropic versus the US government.
Round two here.
So last week we had covered kind of this initial, unprecedented
confrontation between Anthropic and specifically
the Pentagon: Secretary of War Hegseth
had given an ultimatum to the company, and Dario Amodei
refused to allow Claude to be used
for mass domestic surveillance or fully autonomous weapons.
So on March 4th, kind of after we had covered this,
the Department of War made good on its threats.
They sent Anthropic a formal letter officially
designating the company as a supply chain risk.
This makes Anthropic the first American company ever
to receive a label normally reserved for foreign adversaries.
So federal agencies moved quickly to comply
with this new designation.
The Treasury, State Department, and HHS
all announced they're ending their use of Anthropic products.
But here's kind of the crazy part.
Even as the government is moving to blacklist Anthropic,
the US military is still actively using Claude in combat.
Reports have shown that Claude
is powering Palantir's Maven Smart System,
which was used to identify over 1,000 targets
in the first 24 hours of the now ongoing operations
in Iran.
Interesting.
Including, quick note, the possibility that Claude may have
been involved in the selection of a target
that ended up killing 150 children in a school
that was next to a naval facility in Iran.
So, yes.
And we actually got some more background,
at least from one perspective, on how we got here.
So Pentagon Undersecretary Emil Michael actually
went on the All-In Podcast and talked
about how, after the US military operation that
captured Venezuelan dictator Nicolás Maduro in January,
Anthropic had contacted Palantir
to ask whether Claude had been used in that operation.
We alluded to that in our initial overview of these events.
But Michael actually said on the podcast
this caused what they called a massive "whoa" moment
at the Pentagon, where they suddenly
realized they were completely dependent on a single AI
provider that might suddenly shut off access mid-flight
due to a guardrail or ethical objection, leaving
warfighters stranded.
This drama then started spilling even more
into the public eye when a 1,600-word
internal memo from Dario Amodei got leaked to the press.
And in it, he torched OpenAI.
Specifically, they had this replacement
deal with the Pentagon that we also covered last week.
He said it was 80% safety theater.
He accused OpenAI of, quote, straight-up lies
and claimed Anthropic was being punished for not
giving, quote, dictator-style praise to President Trump.
Amodei later issued a public apology for the tone of the memo.
He said it was written during a very chaotic day
and didn't reflect his considered views.
But the fallout for OpenAI is also bad here.
Sam Altman had to face furious employees
at an all-hands meeting after this deal,
admitting the company's hasty Pentagon deal
looked, quote, opportunistic and sloppy.
In fact, they actually lost someone already,
Caitlin Kalinowski, who worked on robotics at OpenAI
and publicly resigned over the deal,
stating that surveillance and lethal autonomy
are lines that deserve more deliberation than they got.
So where we stand now is basically Anthropic says
it has no choice but to challenge
the supply chain risk designation in court,
yet they still, at the moment,
offered to continue providing Claude to the military
at nominal cost during the transition
to ensure frontline warfighters aren't deprived
of the tools they're currently using in combat.
So Paul, this is just getting messier.
Both of the CEOs end up, for different reasons,
looking pretty bad publicly.
Like, where do you think this stuff actually lands,
especially amidst literally an ongoing war using Claude?
Yeah, I don't know. I said last week
on episode 200 that I thought they'd,
at some point, find common ground.
It obviously got messier.
You know, the internal posts from Amodei
probably didn't help much.
Yeah.
You know, at a time when they probably needed
to try and tone it down a little bit,
that got out, and that's not gonna have that effect right now.
And obviously, you know, Sam sort of came out
regretting how opportunistic everything looked
on their end, sort of apologized internally for that,
and said they probably could have handled that better.
So, no, it's just a high-pressure environment.
It doesn't help that there's so much
active military conflict slash war being pursued
at the moment by the US government
where these systems are being used
and they can't just flip a switch.
I mean, the alternative from reports right now
is that the only thing OpenAI has
that the government can use is GPT-4.1,
which doesn't even have reasoning capabilities.
So, like, how good is that?
So from everything we're seeing, now obviously
there's all kinds of things that can be happening
behind the scenes that we're not, you know, made aware of.
But the only model capable of being used
in classified settings is Claude
and you don't just like flip a switch and change that.
So it's just very bizarre to me
that the government has taken the stance
they're taking while probably still negotiating
back channels to make this all work out.
You know, I thought it was interesting.
Amodei called out the fact, which I alluded to last week
on episode 200, that they also didn't give money
to Trump and that that was probably part of the reason
why they don't like Anthropic.
So I pointed out that Greg Brockman,
who is the president of OpenAI, and his wife Anna
are massive givers to Trump
and supporters of a super PAC designed
for the Republican Party.
So there's definitely just elements of politics,
like lots and lots of politics involved here.
The cloud companies had to come out
and clarify for their customers
because if you think about it like AWS and Google
and even Microsoft now allow access to Claude
through their clouds.
And so they had to come out and say,
listen, you, as a customer of our cloud services,
still have access to Claude.
This is only specifically for these government instances
where you know, we're not allowed to use it
because there was a concern that it was a domino effect:
once the government identifies them as a supply chain risk,
is it a broad supply chain risk,
like no one can work with Anthropic,
or is it narrow, specific to the use case?
Based on Anthropic's own reading of the law,
and it sounds like the cloud companies' readings
of the law, it is narrow,
meaning it's only for that specific use.
So some of the things I'm watching for moving forward:
do we see more AI researchers on the move?
You mentioned the head of robotics at OpenAI,
who has left, it seems, more for moral reasons than anything.
Whether or not the government and Anthropic strike a deal
despite the bluster and the egos
and all the stuff going on,
like there's probably still a deal to be had,
even with, what was his name, Michael,
going on that All-In Podcast.
Yeah, so I mean, some of the stuff he was saying was BS,
like there was obviously some political messaging
in what he was saying, but outside of that,
like, you get it.
It's like, okay, no, I understand.
You wouldn't want to be dependent on a single vendor
for any reason if you're in the government.
So a lot of what he said made sense,
then there was some just political stuff mixed in there.
But he even said like, hey, I'm a deal maker.
Like at the end of the day,
like I just, I wanna make this work
and if that means we do a deal with Anthropic, fine.
Yeah.
So I do think both sides are very open to this.
I think it's gonna be interesting to see how quickly
these other labs are able to step up.
I still have heard nothing from Google.
I may have missed a statement from Demis or Sundar or somebody,
but I haven't seen anything from Google,
which tells me they're probably working behind the scenes
to like do something with them.
If they don't already have something in place with them,
I think they're a more likely player than OpenAI in terms
of their ability to like scale up and do something like this.
So I don't know.
I mean, it'll be interesting to see what happens
with these other labs and how quickly they fill the gap.
Again, my guess if I was like a betting man here
and I was like playing on Polymarket or something,
I'm gonna imagine they do a deal with Anthropic,
both save face in some way.
They find an off-ramp to defuse this,
Anthropic keeps delivering what it does,
and the government gets deals in place with the other labs
so they have backups and redundancies
that avoid the concerns they have.
And then the other thing, Mike,
that I'm more and more starting to pay attention to,
because we're actually starting to see some data
on it, is public sentiment on AI.
So there was an NBC News poll.
I think I saw this over the weekend.
They talked to a thousand registered voters,
where they're looking at like ideology over electability.
So it was like a prelude to the midterms
and they're trying to start to figure out
like where the party's gonna fall
in terms of candidates who are more ideological
versus more electable.
So there was a lot of politics in it,
but then they asked this question.
These were mostly done through phone interviews.
So this is the interviewer saying,
okay, now I'm gonna give you a few names of several terms,
public figures, places and groups.
I'd like you to rate your feelings toward each one
on a very positive, somewhat positive, neutral,
somewhat negative or very negative.
If you don't know the name,
then just say so and indicate as such.
So then they go through a bunch of things
and they say, how do you feel about this, about this?
So one of them is "AI, that is, artificial intelligence."
That's how they phrase it.
It scored a negative 20, meaning the negative sentiment
was 20 points higher than the positive sentiment.
Now, without context, we don't know if negative 20 is bad,
so I'm gonna give you a little bit more context.
So the Pope scores a plus 34.
So I assume these are pretty balanced
between Democrat and Republican interviews.
Although some of the data would tell me
it was probably more Republican people
they were talking to.
Because Stephen Colbert is the only other thing
that had a positive rating, at plus 10.
So apparently everybody loves Stephen Colbert.
And then if we go down the list,
I'll just hand pick a few other ones to give you some sense.
So Marco Rubio, negative seven, Trump, negative 12,
the Republican Party, negative 14,
ICE, negative 18, AI, negative 20.
The only things that scored lower than AI
are the Democratic Party at negative 22
and Iran at negative 53.
Wow.
So public sentiment for AI from 1,000 people interviewed
with a plus-or-minus 3% margin of error
says that generally speaking, the public does not like AI.
And the other data point: they do this poll
every five or six months, it looks like,
and this was the first time it specifically
showed sentiment towards AI, which tells me
they're obviously now trying to gauge it.
As we've talked about many times on the show,
the politicians are trying to figure out
if AI can play a role in moving votes in the midterms.
And so this is a poll that tells you, like,
okay, people don't like it;
what exactly they don't like about it starts to become the question.
So that's the kind of stuff I'm watching for.
Again, I think the Anthropic deal at some point gets done.
They find the off ramps and they save face
and they make it work on both ends.
But I don't know, I mean,
we'll see if this week brings a little bit
more toning down versus ramping up the animosity.
Yeah, and we had talked last week,
on episode 200, about the polling around data centers
that was showing this big swing towards negative sentiment
around data centers.
I wonder if we are seeing the beginnings
of this massively negative narrative wave
for AI sentiment and opinion.
I honestly think it's gonna be really hard to avoid.
Like, if I was trying to prognosticate
a little bit, I think the benefits of AI
are gonna be harder to convey,
and they're gonna take longer to realize,
like scientific discovery as an example,
where the negative effects of AI are going to be very obvious.
And if they are connected to job loss,
as I would fully expect, they will be felt by everyone.
Like, everyone starts to know someone who has lost
a job because of AI. And all the positive stuff
that we hope will come, the abundance
that we hope comes from AI,
it's not gonna come as fast as the negative stuff.
Well, somewhat related, let's talk about this,
because in our second big topic this week,
we have some new research and new work
from Anthropic around AI and jobs.
So this week Anthropic published this big new study
that puts some data behind these kinds of warnings
that AI is coming for white-collar jobs.
So Anthropic researchers, as part of the study,
created this new metric that they call observed exposure.
So instead of just guessing what AI can do in different jobs,
they actually compared theoretical LLM,
large language model, capabilities against real-world,
anonymized usage data from Claude
to actually see what tasks people are automating right now
in white-collar work.
And the biggest takeaway here is there's this massive gap
right now between theory and reality.
So for example, they find that AI can theoretically handle
94% of the tasks done by knowledge workers.
But in practice, Claude is only covering 33% of those tasks.
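For readers who want to see the shape of that metric, here is a minimal sketch of how a theoretical-versus-observed exposure gap could be computed. The task list and field names below are hypothetical illustrations, not Anthropic's actual data or schema.

```python
# Hypothetical sketch: computing a theoretical-vs-observed exposure gap
# for one occupation. Task names and flags are made up for illustration.
tasks = [
    {"task": "draft status reports",   "ai_capable": True,  "seen_in_usage": True},
    {"task": "reconcile expense data", "ai_capable": True,  "seen_in_usage": False},
    {"task": "run client workshops",   "ai_capable": False, "seen_in_usage": False},
]

theoretical = sum(t["ai_capable"] for t in tasks) / len(tasks)   # what AI could do
observed = sum(t["seen_in_usage"] for t in tasks) / len(tasks)   # what usage data shows

print(f"Theoretical exposure: {theoretical:.0%}")  # analogous to the study's 94%
print(f"Observed exposure:    {observed:.0%}")     # analogous to the study's 33%
print(f"Gap:                  {theoretical - observed:.0%}")
```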
But as this gap inevitably closes,
the demographics of who is going to be hit hardest,
from what they found, are striking.
So they find that the most AI-exposed workers
are not things like physical laborers.
They are highly educated and well paid.
For instance, workers in the most exposed jobs,
according to Anthropic's analysis, actually earn 47% more
on average than the average worker.
They're 16 percentage points more likely to be female.
And they're nearly four times as likely
to hold a graduate degree compared to unexposed workers.
So they find the top three most exposed occupations
right now are computer programmers,
where 75% of tasks are covered by AI,
customer service reps, and data entry keyers.
Meanwhile, about 30% of workers,
in roles like cooks, mechanics, and lifeguards, have zero exposure.
So I guess the good news here is that the researchers
so far have found no systematic increase
in unemployment for the highly exposed white-collar workers
since ChatGPT launched in 2022.
But they are seeing some early warning signs
for Gen Z specifically.
While companies are not doing massive layoffs yet,
they have severely slowed down entry-level hiring.
And the job-finding rate for young workers,
defined as ages 22 to 25, entering the exposed fields
that Anthropic was analyzing has dropped by roughly 14%
compared to 2022 levels.
So Paul, curious, what's the thing here
you think people kind of need to sit with?
This seems trend-wise to be hitting on some of the things
we've discussed in the past in other studies,
but we know right now there's this kind of big gap between
what is possible and what's actually happening
when it comes to AI exposure or jobs being exposed to AI.
I think just the fact that the labs are now looking at this,
they're realizing that these benchmarks
they've previously been looking at
that are mainly testing for IQ are saturated
and we're not gonna learn much from model to model
based on an incremental point here, point there, on IQ tests.
So they have to start looking more deeply at actual work.
This is on the heels of GDPval,
which we talked about in September 2025,
when OpenAI came out with that,
where they were doing something similar.
At the time, they said GDPval was designed
to help them track how well their models
and others' perform on economically valuable, real-world tasks.
Basically, they're starting with gross domestic product,
a key economic indicator,
and then drawing tasks from occupations
in the industries that contribute most to GDP.
So I think it's extremely important
that in this six-month period we now have labs
being more realistic about where this is all going
by starting to look at the implications for real labor
and jobs.
The paper itself, you know,
people loved the radar chart in there.
It's like easy to understand.
It shows these different professions,
shows the exposure level they have,
and then how little of it is currently happening.
I think in some ways it gives people
like this false sense of security,
like, okay, good, like my field is safe.
As I've said a lot recently on this podcast,
looking backwards isn't gonna tell us much
about where we're going.
The data isn't going to show the impact yet.
You know, I think there's a lot of reasons why,
but we're starting to see more of,
you know, the Block kind of stuff,
and we'll talk a little bit more about this in the next topic.
But the key is they're starting to at least look
at the exposure level.
So they're starting to look at the careers
and the different roles and breaking them down
at the task level,
looking at how the tasks
that make up jobs are going to be impacted.
And again, this isn't even that new of a concept.
In 2023, when GPT-4 came out,
there was a paper on this concept
where they were looking at the O*NET database,
which breaks down like 800 professions.
It's actually really cool.
If you've never looked at it, it's a government database,
and you can go in and you can put in a job profession
and it breaks it down into the 20 or 25 tasks
that make up that job.
And so that's in essence what they're doing here.
So when I built JobsGPT, this is exactly what I did.
So if you've never tried it,
you can go to smarterx.ai/jobsgpt.
It's just a free custom GPT.
And my goal there was actually building
on the OpenAI Microsoft paper from 2023.
And what I was trying to do was project out
as the models get smarter and more generally capable,
what will the exposure level be of different jobs?
And so if you go into JobsGPT
and you put a job title in, it does a task-level analysis.
So it looks at what the tasks are that make up that job
and then it applies an exposure level.
I devised an 11-element exposure key,
an exposure key that looks at things like voice capabilities
of the models, advanced reasoning, persuasion,
digital world action (or AI agents), physical world action;
it even gets into robotics.
And so the whole point when I built JobsGPT
was trying to look at these jobs,
break them down into tasks, and then say,
what can AI do today and what is it gonna be able to do
as the models get smarter and the tasks become more exposed?
So that's the basic premise of what they're doing here:
they're looking at this database of all these tasks
and then trying to project out.
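As a rough illustration of that task-level approach, here is a sketch assuming a job broken into O*NET-style tasks and a simple exposure key. The categories, weights, and tasks below are invented for illustration; they are not JobsGPT's actual rubric.

```python
# Illustrative task-level exposure scoring in the spirit of the approach
# described above. Exposure-key categories and weights are hypothetical.
EXPOSURE_KEY = {
    "text_generation":    0.9,  # drafting, summarizing, editing
    "advanced_reasoning": 0.7,  # analysis, planning
    "digital_action":     0.5,  # AI agents acting in software
    "physical_action":    0.1,  # robotics still lags
}

# A job represented as O*NET-style tasks, each mapped to one capability.
job_tasks = {
    "write audit memos":        "text_generation",
    "plan engagement strategy": "advanced_reasoning",
    "file forms in the ERP":    "digital_action",
    "visit client facilities":  "physical_action",
}

scores = {task: EXPOSURE_KEY[cap] for task, cap in job_tasks.items()}
overall = sum(scores.values()) / len(scores)

for task, score in scores.items():
    print(f"{task}: exposure {score:.0%}")
print(f"Overall job exposure: {overall:.0%}")
```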
The observed exposure right now is coming
from the use of their APIs.
So it's an imperfect thing, because
when we talked in the fall about Anthropic's efforts to look at this,
Claude was almost exclusively being used as a coding tool.
Yeah.
And now that Anthropic is becoming way more popular,
even in the last 30 days,
with this kind of data looking at their API usage,
like Claude Cowork and things like that,
you're gonna start to get a much more representative data set
of how the exposure is spreading.
So yeah, that was kind of my notes on that.
I think it's a good report.
It's pretty dense.
It's not for everybody.
The main takeaways you hit on
are probably sufficient for most people.
Yeah.
If you're really intrigued by this,
I would say go dig into it,
but it's pretty heavy reading.
It does allude, Mike, to the importance
of establishing your own evals within your company,
like knowing the three to five things
that you wanna test each time a new model comes out
and having those in place.
And the other thing that I got thinking about,
so this morning I was listening to a podcast
with Peter Diamandis where he was interviewing Andrew Yang,
who we talked about, I think on episode 200,
or maybe it was 199 or 198.
Yang ran for president in 2020 on the premise
of a universal basic income, the need for it,
because he saw the future of work being automated.
Really good interview,
but it got me really thinking this morning
and I actually like I have to talk to you and Taylor
about this because I'm like,
we have to do more on the research front around this.
So I think about the social contract
and they were talking a lot about this premise
of the social contract and what is expected from workers.
Like I go to college, I get a good education,
I come out, I'm a good worker.
Like I should have opportunities.
Like I should have the ability to earn a living
and find fulfillment in my work.
If I fulfill my part of it, then society's part,
you know, what it contributes back to me,
is that I get a job and I can raise a family
and buy a house and I can do all these things.
But what if that social contract breaks?
If AI causes that breakage of the contract,
then what happens?
So I actually ran this through GPT-5.4 Thinking,
which we'll talk about; GPT-5.4 Thinking just came out.
And I just threw in a prompt around this.
I was like, talk to me about the social contract
related to AI and jobs, and I'm just gonna read what it gave me,
because I vetted this
and I was like, yeah, this is actually really good.
It's saying it better than I could say it,
and I had like five minutes to get this ready
before the show.
So I just wanna read this because I think this is really
important for people to understand.
And like I said, it's somewhere we're probably
gonna spend more time talking about
and do more research on.
So it says the social contract
is the implicit deal between workers, employers,
and society about how work should function
and what people can reasonably expect in return.
With jobs and AI, that deal gets stressed
because AI can change who does what work,
how value is created and who benefits from it.
A simple way to think about the traditional social contract
around work is: people build skills and work hard;
employers provide wages, stability, and opportunity;
society supports education, labor protections,
and a safety net. In return, people can earn a living,
build a future, and have dignity through their work.
AI raises the question:
does that deal still hold if machines can do more
of the tasks people used to do?
The social contract debate around AI usually centers
on a few big issues.
One, if AI boosts productivity, who gets the gains?
Do the benefits go mostly to company owners
and a small number of technical or AI-forward workers?
Or are they shared with employees
through better pay, shorter work weeks,
retraining, and broader prosperity?
Two, what does society owe workers whose roles are changed
or displaced?
If AI automates part of someone's job,
is the expectation that they simply adapt on their own,
or do employers and governments have a responsibility
to help with retraining, transition support,
and new pathways to work?
Three, what is the obligation of employers using AI?
Is it acceptable to use AI to cut labor costs
or should companies also use it to augment workers,
improve job quality and create new forms of value people
can participate in?
Four, what happens to fairness and dignity at work?
If AI is used to monitor workers, score performance
or make hiring and firing decisions,
people worry that the social contract breaks down
unless there is transparency, accountability
and human oversight.
And then finally, five, is a job still the main way
people access security and status?
If AI reduces the amount of human labor
needed in some areas, society may need to rethink
whether healthcare, education, retirement security,
and basic economic stability
should depend so heavily on having a traditional job.
That was actually something that came up
on the Andrew Yang interview I was mentioning.
They were talking about universal basic services,
which I hadn't really thought about.
There's always talk about let's just give people money.
Let's just pay them to live.
But what if, instead of doing that,
the government said, you pay, let's say, $250 a month
and you get healthcare, education,
like you get all that included?
So when people say the social contract around jobs
and AI, they usually mean what new obligations
should exist among companies, workers and governments
when AI changes the role of human labor.
A healthy AI-era social contract might look like this:
workers get training and a real chance to adapt;
companies share productivity gains more broadly;
AI is used to augment people where possible,
not just replace them;
decisions that affect livelihoods remain accountable
to humans; society strengthens safety nets
for periods of transition;
and people retain dignity, agency, and economic opportunity
even as work changes.
So it really becomes a fairness and power question:
if AI creates enormous value,
what do people owe one another
so that progress benefits more than just a few?
Super, super important stuff.
We've talked about all of those elements
at different times in more isolation.
And I think on this podcast,
we probably need to start having more conversations
around this social contract related to work as a whole.
It's kind of what we set out to do
with our Marketing AI Industry Council,
where we focused on the impact on people first,
and that's the marketing talent report
that we put out last month,
where we were looking and saying,
well, what happens to the people?
And I don't think enough people are asking that question.
And I will tell you this.
I have done a lot of workshops.
I've done a lot of public talks.
I've spent a lot of time with executives
and entrepreneurs, government leaders.
No one wants to fire people.
Like when we talk about the inevitability
of jobs going away,
I've yet to meet a CEO who takes joy
in getting to lay off 20% of their workforce.
No one wants to do it,
but there are going to be CEOs forced to do it.
Yeah.
And so I think we just have to be talking more about this.
And I don't have the solutions,
but the one thing I had floated last year
is like, is there a tax on automation?
So if you're claiming you laid off 4,000 people
because of AI, not only do you have to pay unemployment,
but you have to pay an AI tax.
And I have no idea.
I've never heard anybody talk about that.
It hasn't come up from what I've heard with government.
But there's just these kinds of things.
And that's why I would suggest,
if you're interested in this topic,
and we'll put the link in the show notes,
go listen to the Andrew Yang interview,
because they talk about a lot of ideas.
And the unfortunate part is you will come away
from it realizing we're nowhere near
actually having solutions.
But we actually do have some people
thinking deeply about this, including Andrew Yang.
Yeah, and it really seems like there is a wide-open road
to plow through for a politician
proposing some actual policies.
I don't know if they'll be good or bad,
but there's a vacuum in terms of conversation
about what we actually do about it.
And this stuff doesn't change just by thinking about it.
We have to have policies, elected officials,
business leaders, private, public, et cetera,
actually doing things about this
for anything to actually change.
Totally.
And the thing is, I mean, there's lots that worries me,
and I'll end here, because like I said,
these are probably topics for more conversations.
The most obvious path is AI is going to create trillionaires.
Elon Musk will be one before the end of the year,
barring any massive collapse in one of his companies.
There will be others.
And the fastest path to some sort of solution
is a version of universal basic or high income
where the trillionaires and billionaires give money
back to private citizens.
Mm-hmm.
And think about the downstream ramifications of that.
Like, okay, so now we've solved it,
we gave some people some money,
but now the five people in the world
who are controlling AI now control society too,
because they own the media companies
and they're giving people money.
It's like those people are so above the law at that point.
Like, it's just, I don't know,
I would love to like sit around
and sit in some think tanks about this stuff.
I'm sure there's fascinating conversations happening.
I'm just not aware of them at a very deep level.
And I just at least want to know that they're happening.
Yeah, same.
All right, so next up we have an essay
published by Sequoia Capital partner Julien Bek.
And in it, he actually predicts
the next trillion-dollar company won't be
a traditional software provider.
It will be a software company masquerading
as a services firm due to what AI now enables.
So he says that previously,
AI companies built what he calls co-pilots.
They sell a tool to a professional to work with.
But Bek argues that models are now smart enough
to act as autopilots that sell the final work product
directly to the buyer.
So he divides work into kind of his own buckets:
the first he calls intelligence,
which is complex but rule-based tasks,
and then judgment, which is tasks
that require experience and instinct.
He notes that AI has officially crossed the threshold
to handle the first category,
the pure intelligence tasks autonomously.
So the market math behind this is really interesting.
He says that for every dollar that's spent on software,
$6 is spent on services,
and we'll get to why this matters in a second.
So for example, a company might spend $10,000
on accounting software like QuickBooks,
but then they spend $120,000 on the human accountant
to actually use it.
Bek says that the next legendary company
will just close the books for you.
And he actually maps out verticals
that are ripe for autopilot takeover.
There is huge labor spend in a few big areas
that he mentions.
One is insurance brokerage,
which spends $140 to $200 billion a year on salaries.
Accounting spends $50 to $80 billion.
The healthcare revenue cycle spends $50 to $80 billion.
Recruitment is a $200-billion-plus industry,
and management consulting is $300 to $400 billion as well.
So Paul, this touches on this services-as-software idea
that we have kind of discussed in the past:
that we're not going to replace the TAM of software companies,
we're going to replace the TAM of available salaries.
Yeah, so we talked a lot about this.
I think it's on the SaaS apocalypse episode.
Might have been when we touched on this a little bit,
and then it's come up a lot.
Anyway, the basic premise is the software industry in the US
is about $300 to $500 billion, roughly,
in terms of revenue annually.
Annual wages for knowledge workers,
people who use computers for a living, are $4 to $5 trillion.
So the knowledge work labor market
is literally 10X the software revenue industry.
So it's always been an inevitability that that was what
Silicon Valley would build towards.
It's what VCs would fund: companies
that went after the much larger total addressable market.
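Back-of-the-envelope, using the figures cited here (midpoints of the ranges, for illustration only):

```python
# Midpoints of the ranges mentioned above, in billions of dollars.
software_revenue_b = (300 + 500) / 2      # US software revenue, annual
knowledge_wages_b = (4_000 + 5_000) / 2   # US knowledge-worker wages, annual

ratio = knowledge_wages_b / software_revenue_b
print(f"Knowledge-work wages are roughly {ratio:.0f}x software revenue")
# Prints roughly 11x, i.e. the "10X" market being described.
```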
So I'm going to, I'll read a few excerpts from here,
Mike, some of this is just reinforcing what you said,
but I think these are really important points.
So the article said every founder building an AI tool
is asking the same question,
what happens when the next version of Claude
makes my product a feature?
They're right to worry. If you sell a tool,
you're in a race against the model.
But if you sell the work, every improvement in the model
makes your service faster, cheaper, and harder to compete with.
So the example he used: writing code
is mostly intelligence; knowing what to build next
is judgment.
Judgment is different.
It requires experience and taste,
instinct built over years of practice:
deciding which feature to build next,
whether to take on tech debt, when to ship before it's ready.
He uses the example of Cursor,
which is a coding agent, if you're not familiar with Cursor.
Users treated AI as autocomplete.
Today more tasks are started by agents than by humans.
Software engineering accounts for over half of all
AI tool usage across professions.
Every other category is still in single digits.
The reason is that software engineering
is primarily intelligence work.
AI has crossed the threshold where it can do most
of the intelligence work autonomously
and leave the judgment to humans.
Software engineering got there first.
It is coming for every single profession.
And they actually had a chart.
It said, in what domains are AI agents deployed?
Software engineering is 49.7%.
The next closest is back-office automation at 9%.
Other has 7%, but then to give you a sense
of some of the other ones he's looking at:
marketing and copywriting, 4%;
sales and CRM, 4%; finance and accounting, 4%;
academic research, 2.8%; cybersecurity,
customer service; and it goes on and on.
So he's kind of giving a sense
and then he gets into this idea of
co-pilot sells the tool, autopilot sells the work.
So again, if you think about Microsoft
and all of their positioning, it's Copilot.
It's like, you're going to work with the thing
and it's going to do the work with you.
Three to five years from now,
that'll be a very misleading projection
of what the future looks like.
They're not going to be co-pilots.
It is going to be a lot of autopilots;
that's the reality of where knowledge work goes.
So he said today's judgment will become tomorrow's intelligence.
As AI systems accumulate proprietary data,
and this is a very important concept to understand,
As AI systems accumulate proprietary data
about what good judgment looks like in their domain,
the frontier will shift.
What that means is right now you need co-pilots
that work with the humans
because the human still has the experience,
the expertise, the judgment, the taste
that knows what to do next.
What he's saying is once the models get enough training
in specific domains, then they get judgment too.
And maybe we don't need the human to have the judgment anymore
and now it can become an autopilot.
Again, I always go back to the example
of Full Self-Driving in a Tesla.
Most of the time it's a co-pilot.
Like when I was driving my Tesla
home from the airport Thursday night,
it nailed a pothole.
Didn't see it.
I had it on Full Self-Driving coming down a road
that goes directly from the airport to my house,
and it drilled a pothole.
I was paying attention;
I didn't see the pothole because it was dark out.
But that's an instance where the thing doesn't know
what it doesn't know.
And sometimes the human judgment has to come in and say,
hey, this road is torn up because of all the snowplows,
I'm going to actually maintain control while I'm driving.
That was my mistake: I should have just maintained control
myself, and I could have swerved and not hit it.
That's the premise, but at some point,
maybe the car gets better than me at judging.
It recognizes that there are potholes everywhere
and it starts to kind of control that environment itself.
So that's the basic premise: right now
we are largely in the co-pilot phase,
but coding and software engineering
is moving much faster towards autopilot.
And at some point, the rest of these professions
start to follow the co-pilot to autopilot transition.
So he said the total addressable market for autopilot
is all labor spend in the category,
in-sourced and outsourced combined,
but the right place to start is where outsourcing already exists.
So this again is a really important concept to think about.
If a task is already outsourced,
it tells you three things.
One, the company has accepted that this work can be done
externally, so they're willing to outsource it.
Two, there's an existing budget line
that can be substituted cleanly.
Three, the buyer is already purchasing an outcome.
Replacing an outsourcing contract
with an AI-native services provider is a vendor swap;
replacing headcount is a reorg.
The fourth, Mike, I would add there is,
you don't have to fire anybody.
So if you're already outsourcing it,
then the best thing you can do is get rid of that.
Now that's not great if you're the company
providing the outsourced services,
but it at least buys you time
to not have to lay a bunch of people off.
So outsourcing: think about your company,
think about what you currently outsource.
That is the starting point
as AI shifts to where it can do the intelligence
and the judgment.
So he said the playbook: companies should start
with the outsourced, intelligence-heavy tasks,
nail distribution, and expand toward the in-sourced,
judgment-heavy work as the AI compounds.
The outsourced task is the wedge;
the in-sourced work is the long-term total addressable market.
Plotting every services vertical
on an intelligence-to-judgment spectrum
and an outsourced-to-in-sourced ratio
produces a priority map, with labor
total addressable markets in brackets.
The list is illustrative.
So that's referring to a specific thing,
which I would recommend go look at the chart.
It is illuminating.
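To make the kind of chart he's describing concrete, here is a hedged sketch of a priority map as a simple data structure: two axes (how much of the work is intelligence rather than judgment, and how much is already outsourced) plus the labor TAM in brackets. The TAM ranges are the ones quoted in this discussion; the axis scores are illustrative guesses, not values from the essay.

```python
# Sketch of an intelligence/judgment x outsourced/in-sourced priority map.
# TAM ranges ($B) are as quoted above; axis scores (0-1) are hypothetical.
verticals = [
    # (name, intelligence_share, outsourced_share, (tam_low, tam_high))
    ("insurance brokerage",   0.9, 0.8, (140, 200)),
    ("accounting/auditing",   0.8, 0.7, (50, 80)),
    ("legal/transactional",   0.8, 0.6, (20, 25)),
    ("management consulting", 0.4, 0.9, (300, 400)),
]

# Rank by "wedge-readiness": a high intelligence share and high existing
# outsourcing both make AI substitution a clean vendor swap.
ranked = sorted(verticals, key=lambda v: v[1] * v[2], reverse=True)
for name, intel, outsourced, (lo, hi) in ranked:
    print(f"{name}: intelligence={intel:.0%}, outsourced={outsourced:.0%}, TAM=${lo}-${hi}B")
```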
And then I'll just mention a couple of things,
Mike, to zoom into the ones you mentioned,
because it gives a little context
as to why insurance brokerage, for example, is a major one.
So it says insurance brokerage is $140 to $200 billion,
the largest dollar market on the list.
Standard commercial lines are highly standardized.
The broker's value-add is essentially shopping
across carriers and filling out forms;
that is pure intelligence work.
The distribution layer is incredibly fragmented,
tens of thousands of small brokers
each running the same process
so no single incumbent controls
the customer relationship.
Accounting and auditing is another one,
$50 to $80 billion outsourced in the US alone.
The US has lost roughly 340,000 accountants
over five years while demand has grown.
75% of CPAs are nearing retirement.
The licensing path is long,
and starting salaries lag tech and finance.
That structural shortage is pushing firms
to accept AI faster than almost any other profession.
Let's see, I don't know if I did legal yet.
So legal and transaction work, $20 to $25 billion:
contract drafting, NDAs, regulatory filings,
high intelligence, routinely outsourced.
The work product is standardized enough
that quality is verifiable
so the buyer can trust AI output
without deep legal expertise.
And then one final note, actually,
let me do the management consulting one
and then I'll add one more note.
So management consulting, $300 to $400 billion,
huge market, but the work is mostly judgment.
The interesting question is whether AI
can disaggregate consulting into intelligence components,
data gathering, benchmarking, et cetera,
and judgment components like strategic recommendations
with the intelligence layer getting automated
and the judgment layer staying human.
And then this tweet was from Sunday, from The Kobeissi,
how do you say that, Mike?
Kobeissi.
The Kobeissi Letter, I think, yeah.
So we'll put a link in. Here we go.
Finance-related job openings are collapsing.
Finance and insurance job openings
fell 117,000 in December to 134,000,
the lowest level since February 2012.
Available vacancies in these sectors have dropped
by 410,000 or minus 75% since the 2022 peak.
Openings are now even lower than the 2001 recession bottom.
By comparison, the largest monthly decline
during the 2008 financial crisis was 125,000.
As a result, the finance and insurance job openings
rate fell to 1.9%, meaning fewer than two
out of every 100 jobs in the sector
are currently vacant, the lowest since February 2010.
Excluding the 2009, 2010 lows,
this is the lowest rate recorded this century.
The finance industry is bracing for more layoffs.
Wow.
I love how he breaks this down,
but when you start seeing it broken down like this,
it almost becomes obvious.
It's like a no-brainer that this is where things would start
to go, which is a bit scary.
Yeah, and we've talked about the accounting one
on the show before, and I actually did some
consulting, some working executive
sessions, for some major accounting firms,
and the thing I illuminated for them
was they'd lost 300,000 to 400,000 CPAs during the pandemic.
And so my point to them was like,
okay, you're at a deficit of talent.
That just accelerates someone building AI to do that job.
And as soon as you fill the gap of those 300,000 to 400,000,
you've now automated the need for anybody else.
So you need the AI,
but as soon as you train the AI on that domain,
now you don't need the humans that were left.
It's a catch-22, almost. Again,
our whole point here isn't to take a position on any of this.
It's to illuminate the reality of where this goes.
And yes, there are things that can change this,
but more and more, it feels like an inevitability:
it's going to happen, and how quickly,
and what we do about it, are the things
we have to really start thinking about and be more proactive on.
Absolutely.
All right, Paul, before we dive into this week's rapid-fire,
just a quick announcement that this episode is also brought to you
by our upcoming AI for CMOs Blueprint webinar.
This is a webinar unveiling our upcoming AI for CMOs Blueprint,
which is an asset we're creating in partnership with Google Cloud.
And the webinar unveiling it is happening Thursday,
March 26th at 12 p.m. Eastern, 9 a.m. Pacific.
And in the session, myself and SmarterX CMO Cathy McPhillips
are going to actually break down the insights from this report,
which contains real-world use cases, tools, and advice
for how CMOs can adopt AI.
We'll also do some discussion and live Q&A.
Registration is free.
All registrants will receive ungated access
to the full AI for CMOs Blueprint.
To register, go to smarterx.ai/webinars.
All right, so let's dive into this week's rapid-fire.
So first up, we have a post from Wharton Professor Ethan Mollick
that is highlighting the dynamic that,
while AI models are advancing faster than ever,
there's a massive adoption divide in the corporate world.
So he wrote on X this week, quote,
it is amazing how many companies I talk to still
have AI effectively blocked by IT and legal departments
for out-of-date reasons,
when many companies in highly regulated industries
have figured out ways to deploy enterprise ChatGPT,
Claude, and Gemini without any apparent problem.
It is one of the weirdest divides. I speak to two companies
in the exact same industry,
and one has been using AI for the past 18 months;
the other has a committee that has to approve every use case
individually and talks about how AI companies will train
on their data. The deciding factor is
whether an executive is willing to assume risk.
If the answer is no, then risk reduction forces
in the organization, things like IT and legal among them,
have every incentive to avoid anything
that might even be rumored to cause a problem.
It is a leadership question.
So Paul, I have to say, reading that certainly hits home
for me with a few of the conversations
we've had with companies and things we've seen.
Does that align with what you're hearing today,
that there's kind of this diffusion or adoption gap
based on internal, let's call it leadership issues,
bureaucracy, et cetera?
There was an article in the Wall Street Journal,
at the end of last week or over the weekend,
that was "AI Needs Management Consultants After All."
And this was the basic premise.
It's like why OpenAI just announced
their Frontier Alliance and Anthropic is doing deals
with these consulting firms.
They need the people that have the trusted relationships
with these enterprises to get in there
and show them how to use the platforms.
So I mean, we're definitely seeing that.
And then this was what I ended up writing,
the editorial in my Exec AI Insider newsletter
that I do on Sundays.
This editorial, I was like,
I just have to address this.
So I'll just read this.
If you get the newsletter, you've read this already,
but it's the most concise way I can say it.
So I'll just read what I wrote for Sunday.
It said: in March 2023, two weeks before the release
of GPT-4, I published the Law of Uneven AI Distribution,
which stated: the value you gain from AI,
and how quickly and consistently that value is realized,
is directly proportional to your understanding of,
access to, and acceptance of the technology.
In the post, I wrote, so this is what I wrote in March 2023.
I've been thinking a lot lately about the AI adoption curve,
both in our personal and professional lives,
and who stands to benefit most from rapid advancements
in the technology.
Where I've landed is that the impact and benefits of AI
will be unevenly distributed to individuals, companies,
and industries.
In some cases, it will be by your own choice
and in others, it will be by institutional design.
For example: financial services companies
blocking employees' ChatGPT access;
educational systems at all grade levels
struggling to adapt to generative AI capabilities
in curriculum; and my own unwillingness
to install some super interesting AI apps
because I don't know or trust the companies
and how they'll use my data.
This uneven distribution will create dramatic differences
in people's experiences with and perceptions about AI.
It will profoundly impact how much you reap the benefits
of AI in your personal and professional lives,
how much value your company extracts from the technology
and the trajectory of your AI journey.
So then I continued in the newsletter on Sunday,
saying here we are three years later
and the law continues to hold true.
In the last two months alone, I've spent time
with leaders of major financial and healthcare institutions
that are still blocking access
to generative AI platforms in their companies.
And I've met with school administrators
at high school and college levels
who are searching for answers
on how to handle AI in the classroom.
And personally, I have yet to go down the path
of exploring OpenClaw or even Claude Cowork
(OpenClaw is an open-source AI agent
that runs locally on your computer
to perform real-world tasks)
because of the security and unknown risks.
So I'm not accepting of the risk,
like, of what I have to give up to get the benefit.
So for companies struggling to see ROI from AI,
you have to go back to the basics,
AI understanding and access.
The companies that are racing forward
and realizing the benefits of AI
are optimizing AI literacy and empowering their employees
to integrate generative AI solutions into their workflows.
But one of the things I see, Mike, all the time
is they're not starting at the top.
Like, the AI literacy is kind of from the bottom up,
and it does need to be happening at the bottom,
but it also has to happen at the leadership level,
because leaders won't give the priority and urgency
needed for AI transformation
if they don't understand AI capabilities themselves.
So you have to focus on that,
you have to start with those fundamental things
or else we're just gonna be in this continual cycle
of here we are three years later
and those same three principles of lack of understanding,
lack of access, lack of willingness to give up
or accept the risks are still preventing companies
from doing anything with AI.
Yeah, and like we talked about in a couple past episodes,
we see these predictions from people, right?
Like Microsoft AI CEO Mustafa Suleyman
saying, hey, in 18 months,
all white-collar work is going to be automated.
Look at these timelines;
the reality is a lot messier and more uncertain
and more nuanced.
I have 100% been with companies of late
where, if they have even rolled out GenAI in its current form
to all their employees in 18 months, I will be shocked.
People underestimate human friction.
I mean, it is a massive barrier
to doing this the right way in companies,
especially the big enterprise.
And to your point in the previous topic,
that's why some of these outsourced things,
work that's already off your plate
and that you're hiring someone else to do,
are a really natural starting point
for a lot of this adoption.
All right, next up: OpenAI, you know,
has some chaos obviously with the whole Pentagon
and Anthropic situation,
but in the midst of all this,
they actually dropped a massive product update.
They released GPT-5.4.
It was released on March 5th in three variants.
There's a standard version, a Thinking version
for reasoning, and a Pro version for maximum performance.
OpenAI is positioning this
as its most capable model ever for professional work.
It is also taking a direct shot
at Anthropic's enterprise customer base
by launching new native integrations for things like Excel.
The benchmark jumps on this model appear to be huge.
It is the first general-purpose AI model
to natively surpass human performance
on computer-use tasks.
So on the benchmark OSWorld-Verified,
it scored 75%, which blows past the human baseline
of 72.4% and crushes GPT-5.2's score of 47.3%.
On OpenAI's relatively new benchmark called GDPval,
which we referenced in a previous topic
and which tests knowledge work across 44 different occupations,
GPT-5.4 matched or exceeded industry professionals
83% of the time.
It also saw its abstract reasoning score
on the ARC-AGI-2 benchmark jump to 73.3%.
That's up from 52.9%.
And it is the first general reasoning model
rated high capability in cybersecurity
according to OpenAI.
Under the hood, it features a massive million-token
context window and introduces a new tool search feature
that dramatically reduces token usage, by 47%,
by dynamically looking up tool definitions.
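OpenAI hasn't detailed how tool search works beyond what's described here, but the general pattern of dynamically looking up tool definitions can be sketched: keep tool schemas in a registry and inject only the relevant ones into the prompt, instead of preloading all of them. Everything below, the registry, names, and matching logic, is a hypothetical illustration, not OpenAI's implementation.

```python
# Hypothetical sketch of a "tool search" pattern: look up tool definitions
# on demand rather than preloading every definition into the context window.
TOOL_REGISTRY = {
    "create_invoice":  "Create and send an invoice in the accounting system.",
    "summarize_sheet": "Summarize the contents of an Excel worksheet.",
    "book_meeting":    "Schedule a meeting on the user's calendar.",
}

def search_tools(query: str, top_k: int = 1) -> list[str]:
    """Naive keyword match standing in for a real embedding-based search."""
    scored = [
        (sum(word in desc.lower() for word in query.lower().split()), name)
        for name, desc in TOOL_REGISTRY.items()
    ]
    return [name for score, name in sorted(scored, reverse=True)[:top_k] if score > 0]

# Only the matched tool's full definition gets added to the model's context,
# which is where the token savings would come from.
print(search_tools("summarize this excel sheet"))  # -> ['summarize_sheet']
```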
So Paul, OpenAI released GPT-5.4 right in the middle
of kind of their controversy and chaos
with the Pentagon. The benchmarks are pretty impressive.
In early tests, people seem to really enjoy the model,
myself included.
What stood out to you about what the model can do
and where we're at with GPT 5?
The GDPval number is pretty shocking.
So yeah, matched or exceeded industry professionals
83% of the time.
Yeah, I just know we can throw all this data out
and it's hard to understand the significance of it.
And again, think about how fast this is all happening.
GPT-4 came out in March, almost three years ago to the week.
I think it was mid-to-late March 2023.
So we're talking about three years.
I don't know what GPT-4 would have scored,
but it would have been in the single digits,
if they even had these benchmarks back then.
Right, right.
So I mean, again, I think so many companies
have a lack of urgency around this.
And if you just look at a three year trend line,
we are talking about exponentials
in terms of these models' capabilities
to do the work that your people do.
And I don't, I just don't know at what point
people are going to accept all this and act on it.
Like, I don't know what the data point is
that people need to see. Or like,
maybe it's that their company does the 10, 20% layoff
and then they realize this is real.
Yeah.
Or they're asked to have a 10% contingency plan
for June of this year.
And now the CMO is like, oh my gosh,
I didn't know this was actually happening to my company.
I don't know what it is, but like, it's real.
I don't know how else to say it.
Right.
And again, I know we're preaching to the choir
with listeners to our show.
Like, by the very fact that you have proactively chosen
to listen to a show about artificial intelligence,
you are probably in the know on this.
And I'm just going to tell you like,
most people aren't there with you.
All right.
So whatever you can do, whatever data point you need to show
your peers, your friends, your leaders, show them.
Like we've got to move people.
These things are getting really good, really fast.
And it's not just going to be happening
to computer programmers this year.
It's going to start happening to everybody else.
And yeah, I don't know.
I've only tested it on a few things,
but I had one very specific high value use case
on Friday that I was actually doing.
And my personal experience lately is,
I've mentioned this on the show,
but if it's a high value strategy thing,
I normally will test at least three different models.
So I'll have Gemini, I'll have Claude,
and then I'll have ChatGPT.
But then I'll even test variations of those models.
So in ChatGPT, I may do 5.4 Thinking,
and then I'll do 5.4 Pro.
I'll do it with and without my co-CEO GPT.
So I'll experiment with all these different ways
to try and get a massive gain in terms of the value.
And the one project I did with 5.4 was extremely impressive,
but I didn't have time to run it against Claude 4.6
and compare it yet.
But it was impressive.
And I will say it's a task that I would otherwise
have outsourced and paid tens of thousands of dollars for on Friday.
And it did it very well.
It did it in about three minutes
while I was picking up my kids.
Yeah, yeah, that's a bit eye-opening.
We'll repeat our weekly reminder
till we're blue in the face.
If you don't have a paid edition of these accounts,
go get one and go test these out for yourself.
You don't need to read the benchmarks,
or hear the data, or look at the charts.
Just go use the tool. Stop what you're doing,
pause this podcast,
and just go try it out.
I get it every week.
I still get someone saying, oh, I tried this thing
and it just didn't do it.
And I'm like, was it the thinking version?
Like, what model did you use?
It changes things when you use the thinking versions.
Yup.
All right, next up, less than a year ago,
Polish mathematician Bartosz Naskrekki
was a vocal AI skeptic.
He dismissed the technology as a very advanced calculator
that could not understand deep mathematics.
Today, however, he has declared that his quote,
personal singularity has arrived.
So the catalyst for this change in opinion
was something called FrontierMath.
This is an incredibly difficult benchmark created
by a company called Epoch AI
to test models on research-level mathematics.
Naskrekki was one of 30 global experts invited
to contribute to this benchmark.
He designed a problem in the hardest tier possible,
what they call a Tier 4 problem,
based on 15 to 20 years of his own accumulated research.
This problem was a beast.
It featured a documented 13-page solution
and required an answer that was a massive number
to prevent lucky guessing.
You can't just stumble on the answer.
And it was specifically designed
so that a PhD level mathematician would need
at least a month just to figure out an approach.
But during a recent evaluation,
OpenAI's newly released GPT-5.4 Pro
solved the problem.
It became the first AI system to ever crack one of these problems.
And what shocked Naskrekki wasn't just that the AI solved it,
but rather how it solved it.
It didn't brute force its way to a correct answer.
It successfully extrapolated a pattern
to bypass more advanced mathematical machinery.
Naskrekki called the solution, quote,
very nice, clean and feels almost human.
He compared the breakthrough to something that
should sound familiar to listeners of this show:
AlphaGo's famous Move 37,
a moment where the machine didn't just win,
but demonstrated genuine creative insight.
This is not just an isolated incident either.
This represents a pretty big acceleration
in AI's reasoning capabilities.
In mid-2025, only three FrontierMath Tier 4 problems
had ever been solved.
Today, 42% of those brutal problems have been cracked at least once.
And rather than feeling displaced by AI, Naskrekki
actually said he's kind of embracing it now as a collaborator.
He put it like this.
He said, quote, at least I've gained a tool
that understands my idea on par with the top experts in the field.
My singularity has just happened,
and there is life on the other side,
off to infinity.
That's a pretty powerful moment, Paul.
And I realize most of us are not world-class mathematicians,
but it's significant that someone of this caliber in this field
is now doing an about-face on what's possible with AI.
Yeah, I like seeing this, at least his response to it.
Just as a reminder, if people haven't watched it yet,
my keynote from our MAICON event in 2025
is available on YouTube.
You can watch the whole thing.
It was about the Move 37 moment for knowledge workers.
So this is something I've thought a lot about.
And I tend to focus on the human element of it.
So there's the technological capability part of it.
But the human element, as I define it,
is the moment when you realize AI is better than you at what you do.
So for many people, it starts with individual tasks.
Like, oh, it writes abstracts better than I do.
And there's just that one thing.
But then at some point it starts to add up
and you realize it's just kind of generally better at what I do.
And I've got to figure out what else I'm going to do
where I'm going to find my fulfillment.
So my talk was a very optimistic look,
but it was also about the fact that we are all going to have
these moments as writers, as consultants,
as lawyers, whatever, where it's just as good or better than you
at the core thing you do.
And that's a weird moment that we have to prepare ourselves for.
So I always like highlighting examples of it,
especially when there's an optimistic view of I had it.
And life's still OK.
Like I'm still going to go on and solve other problems
and do other things.
And especially in these kinds of fields,
when we're talking science and mathematics and research,
AI being able to crack the code on some of these problems
starts to hint at the fact that AI may start
discovering new knowledge for us and doing new math.
He kind of alludes to that in some of what
he talked about as well.
Yeah, I mean, at the most fundamental level,
the universe is mathematics.
Right.
Everything can be broken down into mathematics.
And so if AI is really good at mathematics,
it actually bodes well for scientific discovery
and the expansion of our human knowledge
and understanding of the universe.
In kind of a meta way, it's really exciting.
Yeah.
Not Meta the company, by the way.
I don't know if this is something that we will talk about.
So in our next topic, tension over AI and journalism
has reached another boiling point this week,
highlighting a massive divide between media executives
pushing the technology and rank and file
reporters dealing with its realities.
So over at the Associated Press, Amy Reinhart,
who is the senior product manager for AI,
sparked a firestorm in an internal Slack channel
when she told staff that regarding AI in the newsroom,
quote, resistance is futile.
Referencing a recent editorial from Cleveland.com
about their use of AI, which, by the way,
we covered on episode 198, Reinhart suggested reporters
should just be in the business of gathering quotes
and let LLMs actually write the stories.
She also claimed that many editors would actually
prefer an AI-written article over a human-written one.
Understandably, the pushback was fierce here.
One AP reporter called the comments, quote,
insulting and abhorrent, defending human writing
over AI-written slop.
Another noted that AI managers seem to live in a, quote,
totally different reality than working journalists.
The AP itself actually officially distanced itself
from her remarks.
So Paul, in episode 198, we had talked about
Cleveland.com editor Chris Quinn
publishing an editorial basically saying that AI
was the future and that journalism schools were letting down students
if they were not telling them to embrace AI.
He even talked about how his own newsroom
is basically doing what Reinhart has outlined,
which is letting AI generate the stories,
keeping reporters focused on reporting.
It just kind of keeps getting messier
and more confusing.
There are a lot of powerful feelings around this.
Yeah, and with the Associated Press stepping in,
even if it's not a formal Associated Press position
and it's just one person within it,
that affects the industry greatly. For people
who maybe don't understand the journalism industry,
the Associated Press having a say in this is significant.
And I can say firsthand journalism schools
are really struggling to figure out how to adapt
their curriculum, what the future of that industry
looks like, how to prepare their students.
We've talked about this on a few recent episodes.
I don't know the resolution to this one.
Like, you and I are as close to this one as probably anyone,
given that we both have a background in journalism.
I don't know how this ends, but this industry's
going to be faced with some very significant challenges
adapting to AI because I think one of the biggest
friction points we see in AI adoption
is the willingness of the people within a company to do it.
And many journalists, most journalists,
didn't go into the profession to become independently wealthy.
They went into the profession because they believe deeply
in the stories that needed to be told
and the impact those stories would have.
And to tell them, you're not going to do that anymore.
You're just going to have AI do that.
You're taking away the reason why they do the job.
And so you're going to just have massive resistance to that.
And I don't know that the journalism schools
are going to create enough of those people.
I don't even want to use the term AI-forward,
because I'm not sure AI-forward applies
to what they're asking these people to be.
Human assistants to the AI, it's almost a reverse thing.
Like they're asking for the AI to be the thing,
and the human is just there to assist the AI.
And I can't imagine journalism schools
are going to be pumping out those kinds of people
from their classrooms.
So I don't know how this gets resolved.
And I've thought about this one many times through the years.
And it's telling to see some of the writing on the wall,
so to speak. Again, not agreeing or disagreeing with it.
But okay, the Associated Press distances itself
from the comments, fair.
But the person saying this stuff is not a journalist.
This person's job title is literally
Senior Product Manager for AI.
That should tell you something about the Associated Press's priorities.
Correct.
Yeah, you can say they don't support it,
but that doesn't hold up.
Right. It's not true.
Right.
It just means they're trying to stop a shitstorm
for a while until they figure out plan B here.
Yep.
All right, so next up: NVIDIA CEO Jensen Huang
made a big statement this week
regarding the booming AI agent space.
He publicly called the new open source AI agent
tool OpenClaw, quote, the most important software release
probably ever.
So we've talked about OpenClaw before.
It kind of acts as an always-on digital employee
running locally on your machine,
and you can interact with it by messaging it.
It's basically an AI agent that will go do whatever you ask
it to do and try to do stuff for you on its own.
And the adoption of this tool has been pretty unprecedented.
By early February, shortly after its release,
it had broken records by hitting over 145,000 GitHub stars
and drawing 2 million visitors in a single week.
It was created by an Austrian developer named Peter Steinberger,
who built the initial prototype in literally like an hour
because he wanted it to exist.
It became this viral success,
and he became a highly sought-after developer in Silicon Valley
and actually recently accepted a lucrative offer
to join OpenAI.
So Paul, we kind of talked a bit about OpenClaw's rise,
Peter's story, him joining OpenAI.
But it's pretty interesting.
Jensen is now calling this the most important software
release ever.
Do you think he's right on that?
I don't know.
I mean, I think it's gaining a lot of traction.
I think the underlying potential of it,
assuming it's secure and safe and anybody
could do something like this and then it can speed up work.
If he's projecting that future, then I
could see that maybe being true.
It's hard to like, it's a very abstract thing
to try and wrap your head around and see what he's seeing.
I did retweet something from Allie K. Miller.
Let's see what this was.
This was March 5th.
And she had tweeted, I went to a sold-out OpenClaw meetup
in New York last night.
Let me tell you what I learned.
And then she goes point by point. I'll just hit a couple of these.
She said, not a single person thinks
that their setup is 100% secure.
So like, these are advanced users
and they have no idea if it's secure or not.
One OpenClaw expert said he has reviewed setups
from cybersecurity experts and laughed.
His statement to me was, if you're not okay
with all of your data being leaked onto the internet,
you shouldn't use it.
It's a black and white decision.
My God.
Again, as I've said with OpenClaw,
like, I don't mess with this stuff.
This is one of those where like,
this just validates to me why I haven't accepted the risk yet
is because the experts who know the risks
are just laughing and being like, dude,
everything's accessible.
Like this can all be leaked on the internet
whatever you give it access to.
So I'm in no hurry to personally test OpenClaw.
I'm happy to read about other people doing it.
But it is just one of those things.
Like, I will happily show up late,
when it's safe and secure and easy to understand.
Then I'll dip my toe in. But in the meantime,
I'll trust Jensen on this one.
Right.
Trust me.
The litmus test: has Jensen given OpenClaw access
to one of his machines?
I doubt it.
Yeah.
But he loves all the inference it's using.
All the compute being used to run inference out there.
Yeah.
And when Jensen says something, remember, he sells chips
that are used not only in the training of the models,
but in the inference when you and I use them.
And things like OpenClaw use a ton of tokens,
which draws on compute power from his chips.
So.
Doesn't mean it's not true.
He has a stake in the game, basically, is all I'm saying.
All right.
Next up, the debate over who owns AI generated art
has a new development.
The US Supreme Court has officially declined
to hear a landmark case on the issue.
So this case centers on Missouri computer scientist Stephen
Thaler, who has been on a years-long legal crusade
to secure intellectual property rights
for algorithms that he created.
So this all started back when Thaler tried to copyright
an AI-generated image, and he was kind of trying
to test the legal limits of our existing copyright system
by intentionally listing his AI system
as the sole author of the work, with zero human creative input.
So at the time, the US Copyright Office rejected his claim,
a district court judge upheld that rejection in 2023,
famously ruling that quote human authorship
is a bedrock requirement of copyright.
And after a federal appeals court affirmed the ruling in 2025,
Thaler asked the Supreme Court to step in
arguing that denying the copyright creates a quote
chilling effect on anyone else considering using AI creatively.
So by declining to hear this appeal,
the Supreme Court is letting the lower court rulings stand,
which basically cements the Copyright Office's current guidance
that purely AI-generated art based on text prompts
cannot be copyrighted.
So Paul, this doesn't solve AI copyright,
but it is at least one step forward,
at least a little bit of clarity, I would guess,
when it comes to purely AI-generated art specifically.
Yeah, I don't know.
I mean, it's good that there are rulings
starting to come.
I just expect this is the very front end of it.
Like, there are so many unknowns still related to AI and creativity
and intellectual property.
We had Crystal Laser, an IP attorney, who has created some content
and some courses on AI Academy specific to this,
and we've had her at some of our summits.
Samantha Jordan is another IP attorney that we actually had
at the AI for Agencies Summit.
I think Samantha presented.
Yes, yes.
So this is an ongoing topic that we're constantly paying attention to.
And this is one of the rare moments where it seems like
something definitive happened.
Yeah, but I don't know.
I mean, I'm no lawyer.
I just have learned that it seems like there's never
an actual end to these things as long as someone can challenge
something.
So we'll see.
And the other thing I would factor in is that the current government
hates copyright.
So I don't know what that means.
They always seem to find a way around laws when they don't like
them.
So, I don't know where this goes ultimately.
Well, some other legal-related news here.
Meta is facing a massive new class action lawsuit
over its AI smart glasses.
Last year, Meta sold over seven million pairs
of its Ray-Ban smart glasses.
They heavily marketed these to consumers
with promises like, quote, designed for privacy, controlled
by you, and, quote, built for your privacy.
However, a recent joint investigation
by some Swedish newspapers exposed a disturbing reality.
Footage captured by the glasses
is routinely routed to a subcontracting firm in Kenya
for human review.
The lawsuit alleges that overseas workers at this firm
have been reviewing highly intimate and sensitive footage
from users daily lives, including people
using the bathroom, undressing, and having sex.
While Meta claims it uses algorithms to blur faces
and identifying information before the footage is reviewed,
sources dispute that this consistently works.
Worse, users cannot opt out of this data pipeline.
So the lawsuit's being filed by a public interest law firm
and charges Meta and its manufacturing partner
Luxottica with violating consumer protection and privacy
laws, arguing that no reasonable consumer
would expect their most private moments
to be watched by human contractors.
How concerned, Paul, should people be about this?
If you didn't know this is how AI training works,
I would imagine you're probably very concerned.
But this is how AI training works.
Humans look at things: conversations, videos, images.
They have to have humans do it.
So if you thought that everything on your Meta glasses
was just for you, welcome to reality.
Like, I don't know.
I mean, again, sometimes I have this assumption
that people are like roughly aware of how these things
work and what is actually private.
I would just say that it's probably pretty safe to assume
that a lot less is private than you think
when it comes to all of this stuff.
And it is disturbing stuff to read.
Especially if you've never seen how the sausage is made.
Is that the saying?
Yeah.
Like, there you go.
Yes, humans look at this stuff that you record.
They look at your photos.
They look at your prompts, even your most intimate things.
There is probably a human on the other end
seeing that stuff.
So before you trust the anonymity of all that,
be careful of reading too much
into those marketing taglines that Meta's putting out there
in situations like this.
Yeah.
And again, like I've always said,
I've never watched an episode of Black Mirror.
I don't need to watch Black Mirror.
Like, I know how technology works.
And I would imagine there are probably Black Mirror episodes
about something like this. People have said to me,
oh, did you see the Black Mirror episode
where they did this?
I didn't, and I won't, because I don't need to see it.
But yes, assume that when you're watching these Black Mirror
things and something's super creepy,
it's probably actually how it works.
All right, our last topic this week, which we got just
before going on the air, actually.
Microsoft just announced a pretty big evolution
to its enterprise offering.
They're calling this Copilot Cowork, which
is designed to help Microsoft Copilot take action
by completing tasks and running full workflows on your behalf.
So instead of just answering a question
or drafting an email, Cowork acts more like an autonomous
agent.
You describe the outcome you want.
It builds a plan that continues in the background
with clear checkpoints so you can confirm its progress.
This is powered by a system called Work IQ.
Using this system, Copilot Cowork draws
on signals across Outlook, Teams, Excel,
and the rest of Microsoft 365,
so it can ground its work in your specific emails, messages,
and data.
And Microsoft has highlighted several huge use cases
for this.
For example, you can hand over calendar triage to Cowork.
It will automatically review your Outlook schedule, flag
conflicts or low-value meetings, and propose changes
like declining meetings and adding protected focus blocks.
You could also ask it, for instance,
to prep you for a client meeting,
and it will automatically pull relevant inputs
from your emails and files to generate a briefing
document, supporting analysis, and even a shareable slide deck.
So this system is designed to keep you in the loop, though,
by checking in if it needs clarification,
allowing you to approve recommended actions
before they are applied.
So Copilot Cowork is currently
being tested in a limited research preview
and will roll out more broadly in late March 2026.
Paul, this is a pretty big deal, but not unexpected.
We've been waiting for basically every other lab
to create the Claude Cowork of their particular offering,
haven't we?
It seems like the most obvious thing.
And everybody seems at least six weeks behind Claude.
So yeah, it'll be a big deal
when it becomes functional for the average user.
On Claude Cowork, we did actually just
have a gen AI app review on Friday.
It came from Katie Rober, who's
an AI Academy instructor for SmarterX,
and she did a 20-minute demo on Claude Cowork.
So it's like one of those tools that
just seems extremely valuable in a work environment.
I think it'll be quite a while before we see enterprises
like really gravitating towards these things,
just given how slow people are to just use gen AI
period.
But these are the kinds of things that
could start to have a compounding effect
within organizations and value creation
when they become more readily available to everybody.
Yeah, I would imagine, especially just
given the enterprise focus of Microsoft Copilot
and how slowly some of these companies can move,
if this works anywhere near as well as Claude Cowork,
as I know Cowork does,
and people actually start turning it on and implementing it,
it's going to be maybe a pretty big wake-up call
for your average knowledge worker who
might not be as on top of this stuff, right?
Where you're like, maybe this is the moment
you talked about instead of the data or the benchmarks.
It's like, wait a second, it can just do work for me.
Right, and just point it to a folder and it can do stuff.
Yeah, it may actually be a key aspect of solving
the lack of adoption that we talked about earlier
and the need for all these consulting firms.
It's like, maybe they just need Cowork
to make it easier for people to use.
Yep.
All right, so that's a wrap for this week.
Paul, just one quick announcement.
We are, of course, running this week's AI Pulse Survey.
We're going to ask: has the Anthropic Pentagon situation
changed how you think about which AI company you use?
And also, how would you describe AI access
at your organization right now?
So if you go to smarterx.ai/pulse,
you'll be able to take this week's survey.
We'd love to hear from you on this.
So far, I would say on number one, it is
changing some people's minds, because
Anthropic was reported to be hitting a $19 billion
annual run rate right now.
And last I checked, they were still the number one app
in the App Store.
So somebody is switching to Anthropic.
Somebody is, yeah.
A couple million people might be switching to Anthropic.
It'll be interesting to see if companies follow.
I've heard some rumblings from people
that their company may actually be switching based on this,
but I don't know if that's actually going to happen.
Yeah, we'll see.
Everything ebbs and flows.
For sure.
Well, Paul, even though we said it's kind of a lighter week,
we had plenty to talk about this week.
Heavy topics.
Heavy topics, like breaking news.
Well, I appreciate you breaking down everything for us.
So thank you.
And until next time.
Yeah, and I think we will have a second podcast
this week.
Mike, it looks like we've got an AI Answers
for Departments special.
We're going to go through marketing, sales,
and customer success questions from our AI for Departments
webinar week.
And so it looks like we're going to have an
episode dropping on Thursday as well.
So join us twice this week, and then we'll be back
for the regular weekly on St. Patrick's Day, March 17th.
That'll be the next weekly.
All right, Mike, thanks again.
Talk to everyone next week or Thursday.
Thanks for listening to the Artificial Intelligence Show.
Visit SmarterX.ai to continue on your AI learning journey
and join more than 100,000 professionals and business leaders
who have subscribed to our weekly newsletters,
downloaded AI blueprints, attended virtual and in-person events,
taken online AI courses and earned professional certificates
from our AI Academy, and engaged in the SmarterX Slack community.
Until next time, stay curious and explore AI.
The Artificial Intelligence Show