
AI Reality Check: Did the LLM Job Apocalypse Begin Last Week?
Cal Newport takes a closer look at recent AI news.
Below are the topics covered in today's episode (with their timestamps).
Get your questions answered by Cal! Here’s the link: https://bit.ly/3U3sTvo
Video from today's episode: youtube.com/calnewportmedia
STORY #1: Jack Dorsey announces layoffs at Block [1:28]
STORY #2: The education level of LLM-based tools [11:45]
STORY #3: What’s happening in the world of computer programming? [19:24]
Links:
Buy Cal’s latest book, “Slow Productivity” at www.calnewport.com/slow
Get a signed copy of Cal’s “Slow Productivity” at https://peoplesbooktakoma.com/event/cal-newport/
https://x.com/jack/status/2027129697092731343
https://www.nytimes.com/2026/02/26/technology/block-square-job-cuts-ai.html
https://x.com/emollick/status/2027153371241607420
https://www.youtube.com/watch?v=56HJQm5nb0U
Thanks to Jesse Miller for production and mastering.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Did the fintech company Block just lay off 40% of its workforce
due to AI automation?
Can the best AI models pass a freshman computer science class?
Programmers love agentic AI,
but how exactly are they using these tools?
For those of you who followed the tech news this past week,
these are all pressing questions,
and we're gonna try to find some answers.
I'm Cal Newport, and this is the AI Reality Check.
Now, I wanna do a quick aside
before we get into this week's stories
because this is a new format for my podcast feed.
I wanna give you a quick explanation.
More and more on the main Monday episode of this show,
I've been reacting to the latest AI news
where I put on my computer science hat,
and I try to push back on hype and vibe reporting,
and surface the deeper trends in these topics
that I think really matter,
but not everyone who listens to that Monday episode
wants to hear about this, so I decided I would move
the AI discussion to its own mini episodes on Thursdays.
This is an experiment, maybe I'll move it back,
maybe I'll move it to its own feed,
maybe I won't do it every week.
So just bear with me, but keep in mind,
if you wanna share any of these episodes,
we're also putting these up on YouTube,
so you can send the video link to someone
who might need to hear some of this reality checking.
All right, that's enough logistics.
Let's get into our first story of the week.
All right, late last week, Jack Dorsey, the CEO of the fintech company Block, which is responsible for Square and Cash App amongst some other products, posted a note on X announcing massive layoffs at his company.
Let me read you from this note.
Dorsey said, today we're making one of the hardest decisions
in the history of our company.
We're reducing our organization by nearly half
from over 10,000 people to just under 6,000.
That means over 4,000 of you are being asked to leave.
All right, later on, he says the following,
we're not making this decision
because we're in trouble, our business is strong.
Dot, dot, dot, but something has changed.
We're already seeing that the intelligence tools
we're creating and using paired with smaller
and flatter teams are enabling a new way of working
which fundamentally changes what it means
to build and run a company and it's accelerating rapidly.
Can I make a quick aside?
This is like a hint to CEOs.
If you are announcing layoffs of 40% of your staff,
can you use capital letters at the beginning
of your sentences?
It really caught my attention in this tweet that he doesn't capitalize any of his words.
I don't know, it feels a little disrespectful.
But let's get back to the actual story here.
The traditional media was quick to embrace
and amplify Dorsey's claim that these layoffs
were because AI made these positions redundant or unnecessary.
Here's the headline, for example, from a New York Times article about the layoffs: Block cuts 40% of its workforce because of its embrace of AI.
Here's the subhead from that article.
About 4,000 workers will lose their jobs
as the payment company does more work
with new artificial intelligence tools.
its top executive said.
Another quick aside, because this is a journalistic thing I've begun to notice more and more, I think really starting around the COVID coverage era,
where you have a claim that feels right
that you want to put in your subhead
because there's a point you're trying to make.
But either it's hard to fact check
or you don't want to fact check it
because you're not quite sure what you're going to find.
It'll be complicated, so you just make the claim.
Then you put a comma and attribute it to someone else.
We didn't use to see attributed claims in subheadlines or headlines, but we've begun to see it more. It's a way of saying: I'm trying to make a point here, and I don't actually want to go and directly verify, did they lay off all these people because of AI tools? I'll just say, they laid people off because of AI tools, said someone. You add it with a comma. So just keep in mind that sort of reporting trick.
If we read the article itself,
the framing makes it super clear
what they're implying here.
Here's from the article.
The cuts, made as Block reported strong financial results for its most recent quarter, are perhaps the most striking example so far of a technology company making plans to eliminate employees because of AI.
I don't mean to pick on the Times; a lot of publications had similar coverage. And the stock price went up 20% for Block.
This is an important article to look at
in part because I got sent it a lot of times.
When I get sent an article a lot of times,
that means it is catching people's attention
and is either exciting or upsetting them.
So it's worth some closer scrutiny.
I think there's a general vibe that this article
is trying to verify or validate,
which is the vibe of something big is happening.
Yeah, we've been talking about AI
could get rid of jobs or whatever,
but now it's happening.
See, look, this is the first shoe to drop
of a major crisis.
It's the first company that laid off
almost half of its workforce.
This is the thing we've been warning you about
major economic disruption.
It has begun.
That is a story that is very sticky
and very attention catching.
But is it true?
Well, if you dig a little deeper,
there's a lot of commentators online
who know this industry sector a little bit better
who are not at all convinced.
Let me give you a few bits of contextual information
about Block and its layoffs.
Between 2019 and 2025,
Block's employee count grew from around 4,000 employees
to over 10,000.
So they had massive growth during the pandemic.
A lot of this growth actually came from acquisitions
in the crypto and blockchain space earlier in the pandemic
when those things were still hot.
Those acquisitions are now, of course, floundering, as those technologies, especially the blockchain-based software technologies, are having a hard time.
A lot of their startups are really struggling.
Despite the fact that the Times said that they had, quote, strong financial results, end quote, if you actually read the industry analysts who study the quarterly reports from Block, they're not impressed, because in the last two quarters they actually fell short of their earnings targets.
So here's an alternative explanation
for what might be going on here.
Like just about every major tech company in America,
Block overhired during the pandemic
when that industry was booming.
Also, like just about every major tech company in the last two years, they're shedding jobs to try to right-size, because they had overhired during the pandemic.
We've talked on this show before about Amazon doing this; Microsoft is doing this.
This is a common trend in recent years.
But how do we know it really wasn't AI, that AI is the reason why they laid off these 4,000 people?
Well, there's a couple of things going on.
One, a lack of specificity in Dorsey's statement. He just says, well, we have these intelligence tools, and then he talks about non-AI things, like having different types of teams and just not needing as many people anymore. There's no specific reference like: this particular tool has taken on this role, so we shut down this division because we don't need employees there. Or: in this division, we laid off the entire entry-level class because the managers can now get by with less. It's very vague what he said.
Two, as we'll hear later in today's episode, though there are major changes happening in computer programming because of new agentic AI tools, basically every serious commentator who has studied this industry says, yeah, the companies haven't figured out exactly what this means yet. We're certainly not ready to lay off half of our workforce. These tools are very new, at least the versions that people are getting excited about.
But maybe the most telling reason why we know this is not AI
is that Ethan Mollick didn't buy this claim.
Ethan Mollick from Penn is a respected AI commentator who is very much on the booster side. He's very much in the AI-is-going-to-change-everything camp.
And even he didn't buy this idea
that AI was responsible for the layoffs at block.
In a LinkedIn post, Ethan Mollick said the following,
referring to the layoffs, this isn't about AI.
But that is a smart way to sell it
if you want to see your stock jump 20%.
Then on X, Ethan Mollick said the following
in response to Dorsey's tweet.
Two things. One, given that effective AI tools are very new and we have little sense of how to organize work around them, it is hard to imagine a firm-wide sudden 50% efficiency gain.
Two, CEOs with vision who hired well
should also use AI for expansion and augmentation,
not decimation.
I'll just say as an aside, I've been hearing this from the managers and programmers I've been talking to in the last couple of weeks about how they use agentic programming. I haven't had any of them say we're laying people off, but I have heard a lot of people say, like Mollick implies here, that the reaction to these tools at a lot of these startups has been: great, now we can do more work with the same people. Let's make more money with the same people, not let's lay people off.
All right, we have another voice of skepticism here. This one comes from Ron Shevlin, an industry analyst who specializes in the fintech sector. So he specializes in the sector where Block is, and he writes about and covers Block professionally as a financial journalist.
He wrote a column right after this, titled: Block lays off 40% of staff and blames it on AI. Don't buy the excuse.
And he goes on to say, yeah, they over-acquired, they made some bad acquisitions, they need to right-size.
And they're blaming AI because it sounds better than saying,
yeah, we made some bad calls during the pandemic
and now we have to adjust to it.
All right, so what's the bottom line here
in terms of reality checking this story?
AI will have an impact on jobs.
I'm not one of the skeptics that says this is a fad
that's gonna go away, that this is gonna be like
blockchain based software that really just failed
to catch on.
But we're not really there yet. Outside of some narrow instances, the tools have not matured to the point where we really understand what's going on, where we're really seeing major changes to the way companies structure themselves. Most of the commentators I can find who follow this closely say, yeah, sure, there are going to be things happening with jobs. We don't know yet if it's really expansions or contractions, or what sectors get hit more than others, but we're not there yet.
There's a tendency, I think, among coverage right now to lean into that vibe that AI is gonna affect jobs, and to keep making the claim that it's happening right now. And what's happening is the CEOs of these companies, especially tech companies, CEOs like Jack Dorsey, are seeing the tendency towards that vibe reporting, seeing how tempting it is for journalists. And so, there's a term someone introduced for this, I think it was something like AI washing: they're trying to justify layoffs that are due to things like pandemic overhiring by saying, well, AI, we're being smart. So they look better, like better decision makers, like they're more forward thinking.
It's important that we cover AI's impacts on jobs accurately, so that when real impacts come, we can see them with clear eyes, react to them honestly, and hold the actual decision makers to account: why are you firing these people? What's happening here? Which leaders are doing this? We really do need to cover that accurately. So we have to stop the vibe reporting on the AI job apocalypse. It's not here yet, and we don't know if it's gonna come at all, but the best we can do is try to be accurate about what we're saying.
All right, second story.
This one's kind of a fun one.
All right, so Anthropic's CEO, Dario Amodei, famously said, I guess this was all in the last year, that their LLM products have the intelligence of someone with a doctorate. Before, it was as smart as a high school student, then as smart as a college student. Now it's as smart as someone with a doctorate. He described deploying this product as like having, quote, an army of PhDs, end quote, in your data center. Last month, he used related terminology. He said, we can offer you a country of geniuses in a data center.
Well, I was thinking about this approach of describing AI with human education levels when I came across an interesting video that was posted in January, which did a really cool experiment. It was made by a TA for Cornell University's freshman computer science course, CS 2112. They probably just call it 2112. This is their advanced freshman fall CS course. So if you come into the CS program there as a pretty advanced student, this would be the course you would take, but it's for freshmen in their first semester. He was a TA in it.
So he said, here's what I'm gonna do. I'm gonna take the three leading AI models, and I'm going to give them every graded thing we do in this class. I will give it to the models, and then I will grade their results at the same time I'm grading the real students in the class, using the exact same rubrics. And then at the end, I will weight the grades, just treat them like a student in this class, and see how they do.
I'll play a quick clip here.
This is the intro to that video.
Can AI pass a first-semester freshman CS class? To answer this question, I ran every single assignment, every exam, every quiz, every graded interaction the students got this semester through the three best models I could get my hands on from ChatGPT, Claude, and Gemini. Then I graded each result with the exact same rubric we use on students, so that I could give each AI the most accurate possible grade in the class.
All right, so this was a very entertaining video if you watch the whole thing, because he goes through specific assignments. He's like, well, look, this is really cool. Oh my God, look at this crazy thing it did. It's well edited. I thought it was really cool. In the end, they have a competition in the class where you create these critters that evolve, and they had the AI models' critters compete with the critters from the class.
A couple of things I noticed from the video: sometimes these models did very well on assignments, sometimes they really struggled, and sometimes they made very revealing, baffling mistakes. Like in an early assignment involving some simple string concatenation, the assignment had you write a program that, using string concatenation, would output the word Hello on the screen. And Claude's submission outputted Hello World World. What's going on here is that there are a lot of CS assignments out there that famously say, hey, write Hello World, it's the first thing you do when you're using a new programming environment. And clearly the model was just statistically pattern-matching its answer. It was like, well, if I'm printing Hello in an assignment, I've got to print Hello World, and then it added another World just to be safe.
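To make concrete what happened there, here's a minimal sketch in Python. This is my own reconstruction for illustration, not the actual CS 2112 assignment, which has its own language and starter code.

```python
# My reconstruction of the exercise (illustrative only, not the real assignment):
# build the required output with string concatenation, then print it.
greeting = "Hel" + "lo"   # simple concatenation
print(greeting)           # correct output: Hello

# What Claude's submission effectively did, pattern-matching on the classic
# "Hello World" exercise it saw everywhere in its training data:
print("Hel" + "lo" + " World" + " World")   # outputs: Hello World World
```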
But how did they end up grade-wise? Okay, so I have the grades in front of me here. They used the latest, greatest models from ChatGPT, Claude, and Gemini. They actually upgraded during the fall, when they did this. They were using the very most expensive version of the Claude LLM available, I forget which one, and when a new one came out, they upgraded to that new one. On some assignments, these things did pretty well, especially the early assignments. On the first assignment, ChatGPT got a 102 out of 104, Claude got a 99 out of 104, Gemini got a 101 out of 104.
They also did well on the final exam, because this was an in-class final exam where you're just writing answers, right? So you just have to use the knowledge in your head. That's a good setup again for LLMs, and so ChatGPT got a 93 out of 100, Gemini got an 84. There were other assignments where they really struggled. On assignment six, ChatGPT got 32 out of 100, Claude got 20 out of 100, Gemini got 13 out of 100. On assignment five, ChatGPT got 60 out of 100, Claude got 6 out of 100, Gemini got 67 out of 100.
The models had a lot of issues with hallucinating. They had a hard time, if you watch this video, when the assignment would give you multiple rules for what to do, and they would just sort of skip some of the rules. Sometimes, I think in the example where Claude got 6 out of 100, it just kind of made up its own assignment and solved that one instead. So it's sort of a mixed bag.
In terms of final grades, two of the models, Claude and Gemini, ended up getting a C plus in the class. This is freshman computer science. At Cornell, you need a 2.5 GPA in the initial classes to declare yourself a computer science major. A C plus is like a 2.3 something. So they weren't doing well enough to actually even major in computer science. ChatGPT did better with a B plus. It was below the median for the class, but it did somewhat better.
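To give you a feel for what treating a model like a student means in grading terms, here's a toy sketch. The component weights and scores are invented for illustration; they are not the course's actual rubric or the video's actual numbers.

```python
# Toy sketch of grading a model like a student: weight each graded
# component and combine. All numbers below are invented for illustration.
weights = {"assignments": 0.50, "quizzes": 0.10, "final_exam": 0.40}
scores = {"assignments": 0.62, "quizzes": 0.75, "final_exam": 0.93}  # fraction earned

final_pct = sum(weights[part] * scores[part] for part in weights)
print(f"weighted course score: {final_pct:.1%}")  # ~75.7% with these made-up numbers

# The declaration math mentioned above: a C+ is roughly a 2.3 on a 4.0
# scale, which falls short of the 2.5 needed to declare the CS major.
print("clears the 2.5 bar:", 2.3 >= 2.5)  # False
```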
Anyways, here's what's interesting about this. There's the catchy gotcha: this is an army of geniuses, this is PhD level, whatever, and yet they're struggling with the first class you take as a freshman in computer science, which is the topic these models are best suited for. But that's not really what this is about, right? Because I'm sure you could get these chatbots to produce the right answers to these assignments if you're willing to be sufficiently interactive, hold their hands, get the prompts in just the right way, and correct them. That's not really the right takeaway here.
I think the right takeaway here is that it was stupid all along for Dario Amodei to try to use human education levels as a way to describe a large language model. This is just different. With the human brain, we have a general-purpose integrated brain that does lots of things, and the whole person is educated. It makes sense to talk about the education level of a person, but not really of a language model.
It turns out a lot of these claims have thin origins. Like with Dario Amodei, I went back and checked this out: why did he originally say that their language models were at a PhD level? It's because, around the time he first started saying that, they had given a model math problems, like a problem set, and it was doing well on the math problems from this problem set. And one of the professors who worked on creating that problem set said, those are hard problems, those are the type of problems I would assign to my graduate students. That's where they originally got the claim that this is PhD level, right?
So this idea of just generally talking about the intelligence level of language models, I think it's anthropomorphizing, and it's not useful. The reality is these are very specialized tools. They tend to get tuned for specialized purposes, and to get their real value, it takes a combination of the tool and you, the human, learning how best to use and deploy the tool, check its work, and redeploy it towards that particular goal. That is a tool-use scenario, and it's very different from imagining an anthropomorphized brain that has a general education level.
So hopefully we can stop using terms like having a data center full of PhDs. Also, that was a clever video, so, you know, kudos to that TA for putting it together. It was a hard CS class, definitely harder than the intro CS classes I took at Dartmouth, but it reminds me of the type of classes we had at MIT. So, you know, it was a hard class.
All right, one final story here. This story actually comes from me. Obviously, there's been a lot going on in the last four or five months with new agentic coding tools being enthusiastically embraced by computer programmers. There are a lot of viral essays going around, and articles influenced by those essays, and podcasts where people are talking about, oh my God, huge changes are happening in the world of computer programming. This is like ground zero for the long-promised claim, we're about three years in now, that language-model-based tools are gonna cause massive disruptions.
But what actually is going on? I've been trying to find out. As people who subscribe to my newsletter at calnewport.com know, a week or two ago I put out a call for professional computer programmers to send me detailed reports about exactly how they and their teams use language-model-based AI tools, and how this has changed in the recent past. I have over 350 such reports in so far. I've carefully made my way through a hundred. I'm really trying to get my brain around what's really happening with professional programmers and these tools.
I thought it would be useful today to read you excerpts from two responses that I think are very typical of the type of responses I'm reading, to give you a better picture of what exactly it means for these programmers to be using these new tools. I cut out details in these and added some elision to get rid of identifying details.
All right.
So here's my first excerpt.
I'm a software developer working at a tech startup.
Our use of AI varies by person at the company
but my use has skyrocketed
starting in the fall of 2025.
So much so that I don't write any code anymore
but I'm still heavily involved in oversight and architecture.
I used Cursor quite a bit last year, but I've moved on to working directly in the terminal with Codex at work.
The workflow goes something like this.
Plan a feature or start a discussion about a bug fix with the AI.
Discuss until I'm satisfied.
Have it output a plan, iterate on the plan,
then execute the plan.
After execution, I verify the outcome.
I use Git extensively throughout this process.
Git, by the way, is version-control software for managing code that multiple people are working on.
I've tried the multi-agent approach
where multiple agents are working
on different Git work trees at the same time.
I can't do it.
It's too much context switching, and I end up just accepting things I wouldn't normally accept, because it's an exhausting process. The quality dips dramatically.
I love my current workflow.
I've developed things in the past week
that would have taken me months before.
All right.
Let's pause there before I do the second excerpt. This, I would say, is very typical of what I would call the enthusiastic all-in user among the subset of professional programmers. Most of the code they're producing is now actually being generated by an agentic AI tool. Typically it is Claude Code, where they switched the model behind it in the fall, I don't know if it was Opus to Sonnet or Sonnet to Opus, and that really seemed to make it good enough that a lot of people wanted to use it. Though I would say ChatGPT's Codex is also commonly used.
But there's something interesting about this, or actually, I want to point out two things. One, there's a lot of just chatbot discussion happening in these workflows. Remember, he talked about making a plan, iterating on the plan. That's all actually chatbot interaction. So, sort of related to using these tools to produce more code, these programmers have entered a more interactive mode. They want to talk back and forth.
It reminds me a lot of the research I did for the New Yorker about how students use chatbots to write papers. They find talking back and forth with the chatbot as they write is less straining. So that's picking up here.
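If you want to picture that workflow in code form, here's a minimal sketch of the plan, discuss, execute, verify loop from the first excerpt. Every function here is a hypothetical stand-in I wrote for illustration; none of this is the actual API of Codex, Claude Code, or any other real tool.

```python
# Hypothetical sketch of the plan -> discuss -> execute -> verify workflow.
# These functions are illustrative stand-ins, not a real agent's API.

def ask_model(prompt: str) -> str:
    """Stand-in for one back-and-forth turn with a coding agent."""
    return f"[model's response to: {prompt[:50]}...]"

def execute_plan(plan: str) -> str:
    """Stand-in for the agent editing files; returns a diff to review."""
    return f"[diff implementing: {plan[:50]}...]"

def build_feature(task: str, discussion_rounds: int = 3) -> str:
    # 1. Plan the feature (or start a discussion about a bug fix).
    plan = ask_model(f"Draft an implementation plan for: {task}")
    # 2. Discuss and iterate on the plan until satisfied.
    for _ in range(discussion_rounds):
        plan = ask_model(f"Critique and refine this plan: {plan}")
    # 3. Execute: the agent writes the code.
    diff = execute_plan(plan)
    # 4. Verify the outcome yourself (in practice: git diff, tests, review).
    return diff

print(build_feature("add rate limiting to the public API"))
```

Notice the human never types code in this loop; the typing has been replaced by conversation and review.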
But also notice this programmer is not really big on the multi-agentic approach, which is what you most often see touted in the sort of breathless online articles and YouTube videos: this idea of, I have 20 agents working at the same time, and this agent checks this agent, and there's a supervising agent that looks at those agents, and then it reports over here to the hierarchy agent, and then that agent is on OpenClaw so that it can send recommendations to my YouTube channel, these super complicated trees of different agents supervising other agents. You really aren't seeing that, at least in my study here. You're not seeing a ton of that among professional programmers. You tend to see it more in people who are working on their own personal bespoke projects and find it really fun. But I don't see it as much, and that's what we saw reflected here.
All right, let me read you one other typical excerpt here from a real professional programmer. I think this captures well another very common type of response, one that is a little bit more reticent but still appreciates the power of these new tools. Let me read this. I'm a software developer working at a tech startup. Our use of AI varies by person at the company, but my use has skyrocketed starting in the fall of 2025. Oh wait, that was the last one. I'm sorry, this is the new one. I don't want to just reread the last one. All right, I'm like a language model here, sort of randomly hallucinating the same answer twice. No, no, here's the real second excerpt.
I'm a staff software engineer at a tech startup. The AI models have made the easiest tasks even easier: scaffolding a solution, boilerplate code, replacing variables, or moving an import. Repetitive tasks are good candidates. LLMs are also useful as a way to quickly investigate the documentation of a tool or get a reminder on syntax for something I'm trying to do. But the easy stuff, the tasks that AI can do well, was never the hardest nor most time-consuming part of my job. When actively using these coding agents, I found that they generally slow me down. Using them introduced tasks I didn't have before: composing a prompt, checking the output, reprompting, manually refactoring when it isn't quite right. It also slows down the code review process. I'm much more detailed in my reviews when I know a coworker used an LLM to generate some or all of the code.
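Just to picture the kind of easy win he's describing, here's a toy example of my own, not from his report: the sort of mechanical rename-and-annotate cleanup that an LLM handles trivially.

```python
# Before: terse, unannotated code a teammate might leave behind.
def f(d):
    return d * 24

# After the kind of mechanical cleanup an agent does well: descriptive
# names, type hints, a docstring. Same behavior, zero hard thinking.
def days_to_hours(days_elapsed: float) -> float:
    """Convert elapsed days to elapsed hours."""
    return days_elapsed * 24

print(days_to_hours(2))  # 48.0
```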
That's also a very common response, and it points out an idea which I think is a fair criticism: people like our first excerpter, who are doing most of their code generation with agentic AI and saying this is saving so much time, are, the more reticent users argue, downplaying the huge amount of work that now surrounds the code. Yeah, you don't write the code yourself, that's faster. But now you have to do so much other work: all of this iteration with the model and the prompts, trying the prompt again, working on your agent Markdown file and your skills harness, and then all of the review on the other side, because if it was produced with AI, you really have to review it. He's saying there's all of this other work surrounding this workflow, none of it's very fun, and it's taking a lot of time. Are we sure this is actually producing the best code? So there's sort of this tension going on in the computer programming world.
Here are my takeaways from this. One, agentic coding tools passed a threshold of usefulness with the Claude Code update in the fall, which has made them much more heavily used. In my survey, something like 45% of the people I've talked to are now producing the majority of their code with an agentic tool such as Claude Code.
All right, two, it's really unclear exactly what the best practices for this are. There seems to be a spectrum of enthusiasm among the users in the space. For sure, on one end, there's way too much AI interaction going on; this can't be the most efficient way to do it. On the other end, there's a lot of reticence. The reality is going to fall somewhere in the middle. We don't yet know what the future of computer programming looks like. I think by the summer there are going to be some best practices, and they'll have some clever acronyms to go with them.
There will be automatic code production. I think we're going to pull back a little bit on how much the AI chatbot should be involved in review as well as planning; I think that's a little bit of just enthusiasm there. I do think a lot of code will still be generated, but we'll be better at where we deploy the code. I think there'll be more standardization about planning and architecture documents, et cetera, which will have a high overhead at first, but it'll allow us to deploy these tools better.
I do not think, based on these interviews, that the hyper multi-agent approach that we see most talked about on the internet is going to become some sort of standard for serious programmers in most places. And the vibe coding you see talked about a lot, give me this app, and I come back a week later and it's done, that really is in the realm of hobbyists, personal apps for yourself, or people who are doing experiments. None of the serious programmers I've heard from so far are doing anything like that, for the most part.
All right, so there's a lot to be done here, but what I'm trying to do here is a reality check. I am not interested in breathless accounts of what's happening online, because that's engagement hunting. I'm not interested in hearing from non-technical reporters who have just heard a lot of those accounts and are like, look, I don't know the details, but I think we can all agree there are not gonna be programmers in the future. I think we gotta talk to real programmers. What is really going on? Something is happening. It's more complicated than other people make it seem. So let's keep at it. I'll read you some more of these reports in the weeks ahead. Let's figure this out the old-fashioned way: turn every page, learn what's going on, what's working, what's not, what's hype, what's not, and let's try to figure out what's actually happening. I think we will, and we'll get on it, especially if you follow me here.
All right, that's all the time I have for today.
Remember, take AI seriously,
but not necessarily everything you hear about it.
I'll be back on Monday with the main episode
and hopefully I'll do another one of these next Thursday.
See you then.
Deep Questions with Cal Newport


