
Cal Newport takes a critical look at recent AI News.
Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here’s the link: https://bit.ly/3U3sTvo
Video from today’s episode: youtube.com/calnewportmedia
ARTICLE #1: America Isn’t Ready for What AI Will Do to Jobs [2:15]
ARTICLE #2: Mass Hysteria. Thousands of Jobs Lost. Just How Bad Is It Going to Get? [9:23]
ARTICLE #3: THE 2028 GLOBAL INTELLIGENCE CRISIS: A Thought Exercise in Financial History, from the Future [14:39]
Links:
Buy Cal’s latest book, “Slow Productivity” at www.calnewport.com/slow
https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/
https://www.nytimes.com/2026/03/05/opinion/ai-jobs-white-collar-apocalpyse.html
https://www.citriniresearch.com/p/2028gic
https://www.nytimes.com/2026/02/25/business/citrini-ai-stock-market.html
https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/
Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
There have been some pretty dark articles published recently
about all the ways in which AI is about to destroy
the worldwide economy.
Now, these include tales of mass unemployment
and collapsing industries and white collar workers
trying to retrain for skilled craft jobs
like woodworking and plumbing.
One of these pieces, a World War Z style dispatch
from the year 2028, which was put out
by a small financial services firm named Citrini Research,
spread so widely and scared so many people
that it was blamed for a temporary dip in the S&P 500.
All that's missing from these tales are the garbage can fires.
So how seriously should we take these economic
Doomsday articles?
Well, if you've been following AI news recently,
this is probably a question that you've been asking
and today I wanna try to find some measured answers.
I'm Cal Newport and this is the AI Reality Check.
All right, here's the thing.
Coverage of AI topics moves in waves.
You'll have a certain sort of take or idea
that will become popular and everyone is writing
and talking about it, and then, seemingly all at once.
All the attention will move on to a new topic
as if the other one didn't exist.
Like back in 2023, for example,
I spent a lot of time trying to explain to people
that a static feed forward, large language model
could not be considered conscious.
I had fierce debates about this and at some point,
the whole conversation just moved on with no resolution.
Late last year to give another example,
all the discussion was around super intelligence
and I found myself having to argue about how
you cannot infer intention in an anthropomorphized manner
from the autoregressively produced outputs of a chatbot.
But then we've moved on from that recently as well.
The topic du jour in AI coverage is this idea
that we might not be ready for mass economic displacement
that AI is now poised to wreak.
Now, I want to quickly go over a few examples,
among many, of recent articles
that have been making this point.
The first article was published online in February
and it's part of the March print issue of the Atlantic
and it was titled America Isn't Ready for What AI Will Do to Jobs.
All right, so if you read this piece,
it opens on a somewhat long history
of the Bureau of Labor Statistics,
which is actually quite interesting, the history of the BLS.
And so you're thinking, okay, maybe this is going to be
a sort of thought-provoking exploration of job cycles
and technological disruption, but nope.
It gets a little darker.
Let me read from the piece here.
But like all statistical bodies, the BLS has its limits.
It's excellent at revealing what has happened.
It's only moderately useful at telling us what's about to happen.
The data can't foresee recessions or pandemics
or the arrival of a technology that might do to the workforce
what an asteroid did to the dinosaurs.
I'm referring, of course, to artificial intelligence.
Yikes.
Remember the asteroid that killed the dinosaurs
killed off most of life on Earth.
So we've kind of raised the stakes pretty high
for what's about to happen with AI.
All right, so the article goes on.
The author says, tasks that once required skill judgment
and years of training are now being executed relentlessly
and indifferently by software that learns as it goes.
I don't know what it means for a language model
to be relentless or indifferent, but I guess they are.
Quick fact check: the language models driving
most of the tools that we're talking about here
don't learn as they go.
They're static, trained in static batches.
I guess you could make a case that if you're looking
at like a terminal agent like Claude Code
that it could be doing updates to a markdown file
that it uses as part of its prompting,
but I don't think that's a great understanding
of how this AI works.
The article treats it more like a human brain.
All right, let's keep going here.
But anyone subcontracting tasks to AI is clever enough
to imagine what might come next.
A day when augmentation crosses into automation
and cognitive obsolescence compels them to seek work
at a food truck, pet spa or massage table,
at least until the humanoid robots arrive.
Man, the word might does a lot of work in this essay.
He said before, AI might be like the asteroid
that destroyed 99% of life on earth.
And here he said, AI might make us all have to work
at pet spas until the robots come.
But is there evidence for this?
So what's the main argument for why we should be concerned
about this?
Let me read from the article again.
In May 2025, Dario Amodei, the CEO of the AI company
Anthropic, said that AI could drive unemployment up
to 10% to 20% in the next one to five years
and, quote, wipe out half of all entry-level white-collar jobs,
end quote. Jim Farley, the CEO of Ford,
estimated that it would eliminate literally half
of all white-collar workers in a decade.
Sam Altman, the CEO of OpenAI, revealed that, quote,
my little group chat with my tech CEO friends,
end quote, has a bet about the inevitable date
when a billion-dollar company is staffed by just one person.
Stepping out of the quote here.
The Atlantic piece then goes on to mention layoffs
that recently happened at many companies,
including Meta, Amazon, United Health, et cetera.
All right, back to the quote.
Taken together, these statements are extraordinary.
The owners of capital warning workers
that the ice beneath them is about to crack
while continuing to stomp on it.
All right, we've got to hold on for a second here.
I want to break apart the evidence for the claim, or,
well, we've got two claims:
either all life on Earth is going to be wiped out
like the dinosaurs, or knowledge workers
are going to have to become massage therapists.
It's worth taking a closer look
at exactly what this evidence is stating.
I want to start with the layoff piece
because we covered this in last week's episode
of the AI Reality Check, and I've covered it
on my newsletter at calnewport.com as well.
For the most part, these layoffs have nothing
to do with AI automating jobs or increasing efficiency
to the point that you don't need as many workers.
Now, I haven't covered every one of these companies
mentioned in this article, but I did cover
the first two companies mentioned, Amazon and Meta.
I've talked on background to multiple people
within both of those companies, and they're both very clear.
Recent layoffs have nothing to do with AI
making those workers unnecessary.
They have everything to do with over hiring
during the pandemic that's now being corrected for.
The bulk of the layoffs at Meta recently
were in Reality Labs, which Zuckerberg
had put a massive amount of money in over the last five years
to try to build the metaverse
where we're all gonna put on virtual reality helmets
and float around space stations and play cards.
Remember that?
Yeah, it's a bad idea.
So they're firing a lot of those people.
They want to put that money elsewhere.
So right off the bat, okay, this is vibe reporting 101.
You have a scenario that's scary,
and then you take a fact that directionally
seems aligned with that scenario,
but in reality is not, and you list it next to it
to try to ground the hypothetical
into something that's happening now,
which vastly increases its power
to actually cause anxiety or fear.
All right, but what about the other piece of this argument?
The idea that AI CEOs are making dire predictions.
If the owners of capital are warning us,
then for sure we have to listen.
But wait a second, we could flip this on its head.
Of course, the CEOs of AI companies
are making dire predictions about how powerful
their tools are gonna be, because they are, like
the wizard in The Wizard of Oz, saying,
don't look behind the curtain, don't look behind the curtain,
terrified that people are gonna spend more time asking
about their financials, asking about the fact
that in order to keep up with their debt,
and not face implosion over the next one to two years,
these companies, I'm talking about the major AI companies,
need to be the fastest growing companies
in the history of companies.
We're talking about hundreds and hundreds of billions
of dollars of revenue that needs to be generated
at some point in the next year or two.
And it's unclear how they're gonna do this
beyond putting ads on ChatGPT
and Claude Code subscriptions,
which they're currently losing money on.
So yes, of course they would rather be talking
about dire predictions of some future,
because guess what?
That makes their technology the most important
technology in the world and justifies investors
continuing to put money into their company.
So I'm not saying that's definitely what's happening,
but I don't have to stretch to find
an alternative explanation for why Dario Amodei or Sam
Altman love to spout these sorts of big predictions.
It completely serves their purpose.
All right, now I wanna say, look,
this is a good writer, and the rest of the article,
after this, is good: it's well researched,
he talks to a lot of people,
you learn a lot about labor statistics,
you hear from a lot of experts.
But I just wanna point out that the core,
the beginning of the article, has this combination
of vibe reporting and appeal to biased authority
that, as we're gonna see, is sort of a theme
in these economic doomsday articles.
All right, let me talk about another one.
Our second example here, this was from last week,
I think in the New York Times,
it was an op-ed that had a happy feel-good title,
Mass Hysteria. Thousands of Jobs Lost.
Just How Bad Is It Going to Get? Geez.
All right, so the piece opens,
you know, you don't choose the titles
if you write an op-ed, so let's put that aside.
Let's look at the piece,
let's see what it actually argues.
The piece opens with the story of a college graduate
having a hard time finding a job,
let me read this here, just a few years ago,
an entry-level role with a bank
or an asset management firm might have been Mr.
Griefenberger's for the asking,
but the white-collar job market has cooled sharply,
while the unemployment rate remains relatively low,
4.3%, office jobs are suddenly a lot harder to come by
for recent college graduates
and experienced professionals alike.
Now, this is an important real story.
Unemployment's pretty good,
but there is a cooling, especially on entry-level hiring
in knowledge work jobs that has been persistent really
for multiple years now and isn't yet improving.
All right, so why is this happening?
Well, you can ask economists,
and there are three reasons they'll give you,
in descending order of importance.
By far, the number one reason,
most important reason explaining this trend
is that white-collar industries
hired aggressively in 2020 to 2022,
as pandemic-era digital growth was super strong.
And there were these Great Resignation fears,
which led companies to overcompensate
and offer very attractive packages
to get people in the door,
because they were worried about losing their workforce.
All right, now after that pandemic period is over,
the economy is trying to correct for this
and we have a lot of employers not firing people,
but they're going into what's called a no hire, no fire phase
where they say, okay, we need to slow down here.
We have too many people.
Most of us don't wanna do mass layoffs
of too many people because, you know,
they might be useful in the future,
but let's do no hire, no fire,
which is how you get to this unusual situation
where unemployment's actually pretty good
but you also have low new job growth.
All right, the secondary cause mentioned by economists
is higher interest rates,
which started going up in 2022
to try to offset the inflation caused
by COVID-era stimulus investments.
That slows down business expansion, right?
That's economics 101.
The third cause is global uncertainty, right?
Especially in the American context,
there's the tariffs, what's happening in the educational
world, and now global wars.
It's an uncertain time.
So there's a lot of businesses
that are sort of like, let's just wait and see.
We don't need to sound the alarm bells yet,
we don't have to greatly reduce headcount
like we would heading into a strong recession,
but let's be careful
about hiring right now as well.
All right, so let's return now to that Times op-ed.
Surely it says something like, this is what explains this,
so, you know, it is what it is,
hopefully this will get better.
All right, let's read what it actually says instead.
Many companies went on hiring
freezes coming out of the pandemic,
and the slowdown is perhaps just an inevitable adjustment.
All right, so far so good.
Are we gonna leave it there?
Nope, here's what comes next.
But it is happening against the backdrop
of the generative AI revolution
and fears that vast numbers of knowledge workers
will soon be evicted from their cubicles
replaced by machines.
There's kind of a remarkable statement
because it's vibe reporting
but it's vibe reporting that's transparently acknowledging
that it's vibe reporting, right?
They're saying, look, there are good explanations for this
but this other thing is happening now that makes us afraid.
So let's just pretend they're connected.
Even though we have other explanations,
it's directionally aligned with this other fear we have
so why don't we just put them together?
What is the main evidence cited in this op-ed
for these fears?
I'll quote here.
That the people selling
the artificial intelligence
are among those sounding the most ominous warnings
about its potential fallout is notable.
Some of them are prone to bombastic claims,
but it's hard to see how spooking the public serves their interest.
It might be wise to take their predictions at face value
and assume that AI is indeed going to devour
a lot of white-collar jobs.
Again, this is the appeal to biased authority.
It is not hard to see why the CEOs of the companies
selling this technology like stories
that make it out to be the most powerful, important technology
of the last 200 years.
Of course they want that story out there
because without that story, again,
it becomes how are you going to generate
300 billion dollars in revenue in the next two years?
They don't want that question.
So they've been spouting these things for the last five years.
I don't know about this idea that we need to take at face value
what the owners of the technologies say
about what their technology is going to do.
I don't think we should take them at face value
at all.
We should be highly suspicious of them.
All right, so anyways, again, this article goes on
and it looks at a lot of things.
It's not a bad article, but again,
we have this sort of vibe reporting:
you mention stuff that's happening
that's directionally aligned with the fear,
then you mention the fear, and then you justify the fear
by saying, look, the CEOs of these companies
are the ones sounding the alarm.
Why would they sound the alarm if it wasn't real?
All right, let me get to the third article
which is the one that spooked the stock market,
and this will be the sort of final example I point out here
before I get to some stronger responses.
This article was called The 2028 Global Intelligence Crisis:
A Thought Exercise in Financial History, from the Future.
It was published on Substack by a small financial services
firm called Citrini Research.
Now look, right off the bat,
if you read this Substack piece,
the authors are clear that they say this is a thought experiment
and not a prediction.
And you'll hear actually that the authors
have been interviewed a lot in the aftermath
of this article going viral and spooking people.
And they're really leaning into this.
This was just a thought experiment.
I was writing fan fiction.
Like, why are people taking this so seriously?
But if you read that same introduction,
they then go on to say, hopefully reading this
leaves you more prepared for potential left-tail risk
as AI makes the economy increasingly weird.
So clearly they're saying, this is a possibility.
This is a prediction.
We're not saying it will definitely happen,
but it's on the table and we need to be worried about it.
So I don't think they get off the hook by saying,
hey, we said this is not a prediction,
but you did say pay attention to this
or you're prepared for what might come.
I'm not a linguist,
but that kind of sounds like the definition of a prediction.
All right, so what does this article actually say?
Well, it is written in the style of World War Z.
That is, it's written like a dispatch,
and I think it's like a financial report
like these companies write,
but from the year 2028,
reflecting on the dire current circumstances
and how the economy got there.
So it's told in this sort of fake future retelling style,
which is a very powerful style.
Let me read a quote here from early
in this sort of fake dispatch from the future.
Two years, that's all it took to get from contained
and sector-specific to an economy
that no longer resembles the one any of us grew up in.
This quarter's macro memo is our attempt
to reconstruct the sequence,
a post-mortem on the pre-crisis economy.
And then it goes on to lay out the scenario
where it starts like right now.
And it's like, well, there's layoffs happening,
but we were happy about productivity, booms,
and the stock market goes up until about the fall of 2026,
and then as automation continues,
these cyclically reinforcing negative feedback loops emerge,
the economy crashes the next year in November of 2027.
And again, we're back to garbage can fires
and knowledge workers having to eat their dogs.
All right, this was a very effective article.
It spread really far for two reasons.
One, that World War Z style of storytelling,
where you're telling a story,
like this is what happened, let me look back on it,
is very emotionally engaging,
and it presses fear buttons much more
than sort of straightforward analysis or prognostication.
And two, there's a vibe reporting trick here
that we've seen in the other two examples.
They peg their fake scenario
to something that's real happening right now.
It began with layoffs in the tech sector in 2026,
which are happening right now.
Now of course, as I've covered in this episode
and the last episode ad nauseam,
the layoffs in the tech industry started a few years ago
in response to overhiring during the pandemic.
But whatever, when you peg a story
that ends somewhere fantastical and terrible
to something that's happening right now,
your mind puts it on a reality trajectory,
and it makes it much more believable.
So that went viral.
People said it had to do with, not a collapse,
but a minor dip in the S&P 500.
Other commentators have said
there are a lot of factors
that might explain that temporary dip
in the S&P 500, but it got a lot of news,
especially in the financial world.
All right, so how seriously,
I mean, I talked about some of the bad reporting techniques
in these articles, but that doesn't mean,
a priori, that they're also wrong.
So how seriously should we take these scenarios
of economic doom?
I gotta say, they're very anxiety-provoking.
I don't like dystopian fiction, right?
Like I read World War Z, I really didn't like it.
I don't like watching zombie movies or dystopian,
especially collapse-of-society, tales and movies.
They press a lot of buttons for me.
So I'm someone who knows a lot about AI
and a critic of hype,
but even for me, these were distressing.
So I can only imagine how much distress
these type of articles are causing
for the millions of people that are reading these
in major publications.
So how seriously should we take them?
Let me tell you what made me feel better,
and hopefully it'll make you feel a little better as well.
In the wake of the Citrini article,
because that spread through the financial world
and might have had an actual impact on the stock market,
professional economists
and global macro strategy analysts,
people who, their goal is not engagement
or impacting the conversation.
It's to make money based on accurate understandings
of what's likely to happen in the economy.
They came out of the woodwork and said,
hey, enough, these are ghost stories,
and we have no reason to believe they're true.
And hearing from these economists, I have to say,
made me feel a little bit better.
I'm gonna give you some quotes,
and hopefully it'll make you feel a little better as well.
The New York Times, to their credit,
published an article called
Bleak Research Report Stokes AI Debate on Wall Street,
written by a financial reporter,
and they actually quoted some serious economists
who were not that impressed by the Citrini article.
Let me read you two quotes.
Here's one.
The argument leans heavily on narrative and emotion
rather than hard evidence.
Jim Reid, a strategist at Deutsche Bank, said of the report
that doesn't mean it will ultimately be wrong,
but he added that the vibes-to-substance ratio
is undeniably high.
Right, here's another quote.
On Tuesday, Christopher Waller, a governor on the Fed board,
noted that he had not read the Citrini report,
quote, deeply, end quote,
but pushed back on the broader idea
that AI will lead to a rapid rise in unemployment
as technology displaces white-collar workers.
I don't think that is gonna happen, Mr. Waller said,
adding that he is not a doom and gloomer,
like that report was.
I think my favorite response, however,
came from Citadel Securities.
So a global macro strategy analyst
for Citadel Securities named Frank Flight
put out a report in the aftermath of the Citrini article
that had a sort of sarcastic title,
The 2026 Global Intelligence Crisis.
So the Citrini report was the 2028 Global Intelligence Crisis,
as in, hey, everything has gone wrong in those two years.
He instead called his the 2026 Global Intelligence Crisis,
but here he's referring to the intelligence crisis
being people believing these types of stories.
And so he does a sort of faux opening,
describing our current situation,
and that faux opening
sticks in the dagger with the following.
Despite the macroeconomic community struggling to forecast
two-month-forward payroll growth
with any reliable accuracy,
the forward path of labor destruction
can apparently be inferred with significant certainty
from a hypothetical scenario posted on Substack.
He's sort of making fun of people in the community
who were taking that Substack post with any seriousness.
He then proceeds to kind of educate
in a semi-accessible way the types of things
that global macro financial analysts look at,
especially when it comes to technological disruption
and why they don't see signs of some sort of major calamity
coming and they're not particularly worried
about some sort of collapse of the economy.
I'm gonna read a few of these quotes
just to give you a sense of the type of things
covered in this article.
Number one, we would posit that if AI represents
imminent displacement risk,
the real-time population data would show an inflection upwards
in the daily use of AI for work.
The data seems unexpectedly stable
and presents little evidence of any imminent displacement.
Right, so again, there's lots of discussion about this,
but they're looking at data
from the St. Louis Fed, and they say there's no rapid uptake
in AI use the way
the news media would have you believe.
Second quote, the current debate around artificial intelligence
conflates the recursive potential of the technology
with expectations of recursive economic deployment.
Technological diffusion has historically followed
an S curve.
Early adoption is slow and expensive.
Growth accelerates as cost fall
and complimentary infrastructure develops,
eventually saturation sets in
the marginal adopter is less productive or less profitable,
which causes growth to accelerate.
I'm seeing this argument from a lot of professional
analysts of technological disruption.
They say, man, we always make the exact same mistake.
You have slow growth, then you get a period of speed-up,
and we say that speed-up will go on forever.
We keep extrapolating out that curve,
and if you keep extrapolating out that curve, you get collapse
or singularity or whatever the thing is
that you want to say is going to happen.
But this is never what happens.
The S curve goes up, and then other sorts of factors
constrain growth; it goes slower than you think, and there's time
to adjust.
They say we have no reason to believe this would be different.
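To make the S-curve point concrete, here's a small sketch, not from the Citadel report, with made-up parameter values: adoption that follows a logistic (S-shaped) curve versus a naive forecast that extrapolates the early speed-up phase exponentially.

```python
import math

# Logistic (S-curve) adoption: growth speeds up, then saturates.
# All parameter values here are invented for illustration.
def logistic_adoption(t, ceiling=1.0, rate=0.9, midpoint=6.0):
    """Fraction of potential adopters using the technology at year t."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Naive forecast: measure growth during the speed-up years (2 to 4)
# and assume that annual growth factor continues forever.
a2, a4 = logistic_adoption(2), logistic_adoption(4)
annual_growth = (a4 / a2) ** 0.5

for year in (4, 6, 8, 10):
    actual = logistic_adoption(year)
    extrapolated = a4 * annual_growth ** (year - 4)
    print(f"year {year}: S-curve {actual:.0%}, extrapolation {extrapolated:.0%}")
```

The extrapolated line sails past 100% adoption within a few years, which is the singularity-or-collapse style of forecast, while the logistic curve levels off, leaving time to adjust.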
All right, let me read another quote here.
Displacing white-collar work would require orders
of magnitude more compute intensity
than current levels of utilization.
If automation expands rapidly,
demand for compute definitionally rises,
pushing up its marginal cost.
If the marginal cost of compute rises above
the marginal cost of human labor for certain tasks,
substitution will not occur,
creating a natural economic boundary.
We don't have nearly enough compute for these scenarios.
And as they're saying, as you try to build out compute
for more and more use, it's going to drive up the cost
because we're going to have a mismatch
between demand and actual supply.
As the cost goes up, it drives back down the demand.
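That boundary argument is simple arithmetic, and here's a toy sketch of it, with every number invented for illustration: if the marginal cost of compute rises with utilization, substitution stalls at whatever level of automation pushes compute above the price of the human it replaces.

```python
# Toy model of the compute-vs-labor substitution boundary.
# All numbers are invented for illustration.
HUMAN_COST_PER_TASK = 40.0  # dollars of labor per automatable task

def compute_cost_per_task(tasks_automated, base=5.0, supply_cap=1_000_000):
    """Marginal compute cost rises as automated demand approaches a
    fixed compute supply (a crude congestion-pricing assumption)."""
    utilization = min(tasks_automated / supply_cap, 0.999)
    return base / (1 - utilization)

# Automation expands only while compute is cheaper than labor.
tasks = 0
while compute_cost_per_task(tasks + 10_000) < HUMAN_COST_PER_TASK:
    tasks += 10_000

print(f"substitution stalls at {tasks:,} tasks/period, "
      f"marginal compute cost ${compute_cost_per_task(tasks):.2f}")
```

Under these made-up numbers, automation stops well short of full substitution: rising compute prices, not capability, set the ceiling, which is the natural economic boundary the quote describes.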
We're already actually seeing this in the one sector
where, after five years of work,
we're finally seeing tools, and this is the best-case scenario
for AI, tools
that are really catching the interest of a sector.
And that's in computer programming.
All of the evidence I can find right now
seems to imply that these companies
are selling the compute for these agents
for computer programming at a significant loss
because they're trying to fight for market share.
But again, they have huge debt.
When these companies actually have to try
to make more profit off of this,
and these costs get adjusted to the reality
of how much expense the AI companies are incurring,
you're going to see a real moderation, probably,
in how much we use AI for programming.
Is it really worth $2,000 a month
for an individual? $5,000 a month?
I mean, it's going to be interesting
and that's just for this one first use case.
So I think that's interesting to see as well.
They also say, quote, moreover,
there's little evidence of AI disruption
in labor market data as of today.
In fact, the forward looking components
for labor market tracking have improved recently.
So huge mismatch between what the financial analysts
are seeing and what the op-ed writers are hypothesizing.
The evidence of the financial analyst
is their decades of experience
of trying to understand the labor market
and technological disruption,
the evidence of the article and op-ed writers:
Amazon laid off people, and Dario Amodei says
his technology is the most powerful thing ever.
All right, let me read the conclusion
from this Citadel Securities piece.
For AI to produce a sustained negative demand shock,
the economy must see a material acceleration
in adoption, experience near-total labor substitution,
no fiscal response, negligible investment absorption,
and unconstrained scaling of compute.
It is also worth recalling that over the past century,
successive waves of technological change
have not produced runaway exponential growth,
nor have they rendered labor obsolete.
Instead, they have been just sufficient
to keep long-term trend growth in advanced economies
near 2%.
Today's secular forces of aging population,
climate change, and de-globalization exert downward pressure
on potential growth and productivity.
Perhaps AI is just enough to offset these headwinds.
So that's what they're saying.
And I think this is actually pretty optimistic.
They're saying the reality of major,
major disruptive technological changes historically
has been just enough to offset all sorts of negative trends
and keep at least some growth happening in the economy.
And they say, well, we hope for it.
Here's what they're predicting from AI.
They're like, we have lots of negative growth forces
that we're gonna have to encounter
in the next couple of decades
that are gonna pull down the economy.
Hopefully we'll get enough out of AI
to sort of stave those off
and still get at least some economic growth.
That is a very different vision.
Like, AI as the latest technological innovation
to stave off de-growth is a completely different argument
than, no, no, this is the one technology
in history where the S curve doesn't happen
and it's gonna go exponentially
and it's gonna crash the economy.
So they kinda end on a positive note there.
All right, so here's, let's step back.
First of all, I wanna say the economists make me feel better.
It doesn't necessarily mean, of course, that they're right,
and maybe all these factors
will come together to destroy the economy,
but I do like the fact that the economists
aren't that worried about it.
I think we see this reflected in the stock market
where we're seeing, you know, again,
if serious investors really believed
that the economy was gonna crash in the fall of 2027
and that we're gonna have massive decline
starting in October of 2026,
the COVID dip from 2020
is gonna look like a minor correction, right?
Like it would be substantial.
But the reactions are small.
Like they're actually being,
they're pessimistic on the frontier AI companies
because they think they're spending too much money
so they don't buy the AI tech CEO stories
that their technology is gonna automate all work
which would make them the most valuable companies
in the history of companies.
The stock market doesn't buy it.
We see more moderate bets against specific sectors
where they think they're gonna have practical disruption
like the SaaS sector and even those are modest.
And we're seeing actually much bigger reaction
from things like the cost of oil
going up to $100 a barrel.
That caused way bigger impacts on the stock market
than the scenarios of the last two months
about the economy collapsing.
So to me, that makes me feel better
but it doesn't mean there's not gonna be an impact
and they could be wrong
or maybe the impact is gonna be smaller.
But let's put that on the table now, right?
Let's say, okay, maybe the economy is not gonna collapse.
I don't have to learn how to light a garbage can fire
or become a pet masseuse.
But maybe it's gonna be a hard run.
There's gonna be economic disruption,
maybe more so than with almost
any other technology in the past;
anyway, it's gonna be disruptive in some way.
Let's say that was the case.
If that is and it could be true
and I hope not, but it could be true.
Even then, AI doomsday reporting isn't helping.
What I'm seeing is that these AI doomsday articles,
which try to one-up each other over how prescient
they can be about how bad things are gonna get,
prevent us from responding in effective ways.
If we instead treat AI like a normal technology,
and respond with our normal tools
when we see it doing things we would normally say
are problems that need correcting,
I think we can make much better progress
in containing, shaping, and directing the AI revolution
than by falling back on these massive
dystopian World War Z tales.
This fallback on doomsday writing
is letting the AI companies off the hook.
Look at what I covered last week.
Jack Dorsey
negligently goes off and makes these huge acquisitions,
sort of in an impulsive fashion,
throughout the pandemic, of these crypto and blockchain companies.
They don't go well.
So he then impulsively fires half of his workforce,
because Jack Dorsey can't do anything
in measured increments.
Everything he does is drastic, right?
But because he comes out and says,
this is just the first sign of the AI economic apocalypse,
I for one am learning how to make trash can fires,
because not only am I gonna be a pet masseuse,
I may have to eat the dogs,
because there'll be no money left in the world,
because he leaned into the doomsday reporting,
what was the coverage of the Block layoffs?
Reporters treated them as evidence
of the economic doomsday narrative.
That's what they focused on.
In fact, in one of the two articles I talked about,
the Block layoffs are cited as evidence of what's coming.
The right way to treat that was: yeah, sure,
and I'm sure you have a perpetual motion machine
and you can fly.
But back to the point:
what happened to those crypto investments?
Why did you have to lay off that many people?
Who did you lay off?
Wait a second, most of these jobs have nothing to do
with AI-automatable roles.
We would hold his feet to the fire:
you're being negligent and impulsive.
But instead we're like, oh yeah, thank you, Cassandra,
for helping us understand what's coming.
The same thing has happened with these AI CEOs.
They find that the more dramatic
and fearful a thing they say,
the more the attention turns away
from what's actually happening.
Journalists used to severely distrust billionaire
tech CEOs, but not when it comes to this issue.
We look at them as if they are guiding us
to understand what's happening with this technology.
These CEOs, I've been covering this,
have been saying crazy stuff for the last four years,
and they keep changing what it is, en masse.
They were all talking about superintelligence
and the machines getting out of control,
like an alien mind.
They were all talking about that,
then they all shifted at some point to something else,
and now they've shifted to the economy is going to crash.
They just say stuff,
and it's entirely in their favor.
Because again, if your technology automates all jobs,
well, where am I going to put my money?
The only place left to put my money
is in the three companies
that are going to run all the jobs.
So I think doomsday reporting prevents us
from actually responding.
It prevents us from saying,
when Dario Amodei claims 50% of white-collar jobs
are going to be gone:
uh-huh.
You need to make $300 billion somehow
in the next four years
just to get anywhere near profitability.
How are you doing that?
That's the question we could be asking.
So I think that we don't need to ignore AI
or its impact on jobs,
but we do need to cover it like a normal technology,
so we can deploy the normal responses we would use
when we see disruption or changes,
or when we see it being used as cover for negligence
or impulsiveness or whatever's going on.
And so I hope we move past this.
By the time this comes out,
we'll probably have moved on to something else.
I don't know, AI birds are going to spy on us,
whatever it is.
And I hope so, because I think this AI doomsday reporting
not only is stressing people like me out,
but it's preventing us from actually responding
to the real impacts of this technology
in ways that could really matter.
All right, enough of my sermon.
Hopefully some of this makes you
feel a little bit better this week.
We'll be back, probably next week.
We do this on Thursdays,
not every Thursday,
but if there's something to talk about,
I'll be back next Thursday.
Remember: take AI seriously,
but not everything that's written about it.
See you next time.
Deep Questions with Cal Newport


