
A new NBER study argues the real risk from AI isn’t which jobs are exposed, but which workers lack the savings, transferable skills, mobility, and age advantage to adapt when disruption hits. While many highly exposed professionals appear relatively resilient, a smaller and more vulnerable group—disproportionately women in clerical and administrative roles—faces the greatest danger, suggesting policy should focus less on abstract job loss and more on rapid, targeted support for those least able to adjust. In the headlines: OpenAI pledges community-focused data center investments, the White House pushes an emergency power auction to address rising electricity costs, and Davos leaders debate whether AI disruption may outpace society’s ability to respond.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, a new study that looks at who is best suited to deal with
AI-driven job displacement, and before that in the headlines, OpenAI joins Microsoft
in making new commitments to the communities in which they're building out AI infrastructure.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors: KPMG, Zencoder, Robots & Pencils, and
Superintelligent. To get an ad-free version of the show, go to patreon.com/AIDailyBrief,
or you can subscribe on Apple Podcasts. And if you are interested in sponsoring the
show, send us a note at [email protected].
Lastly, if you want to up your skills, it is not too late to join our New Year's AI
resolution.
We've got over 5,000 people participating now.
You can find out all about it at aidebenewyer.com.
Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in
around 5 minutes.
We kick off today with the latest AI company to commit to making sure their data centers
are good neighbors.
Recently, we've been tracking what will hopefully become a wave of commitments from
companies building these data centers to ensure that there aren't negative externalities
for the communities in which those data centers are located.
If you are a regular listener, you will know that I think this is coming too late, but
I am glad to see it happening.
Frankly, I think that we should aspire not just to not being disruptive, but to actually
being a positive partner.
The latest company to make these commitments is OpenAI, who, in a blog post introducing
the new initiative, which they call Stargate Community, wrote: across all our Stargate
Community plans we commit to paying our own way on energy so that our operations don't
increase your electricity prices.
They noted that every community will require efforts tailored to their unique conditions,
but OpenAI said that their plans could include bringing their own power resources or paying
for local grid upgrades.
Turning to water use, OpenAI said their impact would be minimized by using modern closed
loop or low water cooling systems.
They wrote that these are, quote, innovative cooling system designs that drastically
reduce water use compared to traditional data centers. Water required by our facilities
should be a fraction of the community's overall water use.
For their first data center in Abilene, Texas, they quoted the local mayor stating that
a year's worth of water use for the data center would be half as much as the county uses
in a single day.
In addition, OpenAI is committing to regional workforce development in the communities
around their data centers by establishing OpenAI academies.
They said this initiative would include credentialing and clear pathways to high quality jobs,
aligned to local employers, and the regional AI industry.
The company said that they would also engage with local labor unions and other partners
to support the skilled trades required to build and operate the facilities.
The post concludes, Stargate is a physical infrastructure program that requires deep partnership.
We are reliant on and grateful to the communities that make it possible and we're committed
to showing up as long-term partners.
In a somewhat related story, the White House is pushing for an emergency power auction
in the Northeast to deal with spiraling energy costs.
Last Friday, the administration published a plan to require tech companies to indirectly
fund the construction of new power plants.
In agreement with multiple state governors, the White House intends to compel the nation's
largest grid operator, PJM Interconnection, to hold an emergency wholesale power auction
later this year.
The auction would allow tech companies to bid on 15-year contracts for new electricity
generation capacity.
The goal is to provide PJM with the certainty they require to accelerate construction while
also moving the cost burden onto the tech companies.
As the regulations currently stand, expansion is largely financed by increasing rates on existing
customers.
New power contracts help, but until now, they've only been offered for 12-month terms.
PJM services more than 67 million people stretching from the Northeast all the way to parts
of the Midwest.
They are currently forecasting a 17% jump in peak demand across their system by 2030.
The involvement of bipartisan governors is noteworthy, as electricity has become a
major issue in multiple local elections.
Democratic Pennsylvania Governor Josh Shapiro claimed that PJM has been, quote, slow to let
new generation onto the grid at a time when energy demand is going up.
Shapiro is up for re-election in November and energy costs have been one of the key issues
on the campaign trail.
The question ultimately will be whether an emergency auction will bring meaningful relief
for consumers given the long lead times for new power plants.
The long-term nature of the plan is absolutely a welcome change, but who knows if it will
be enough to defuse the issue by November.
Now moving back to OpenAI: in addition to Stargate Community, the company also announced
their new Education for Countries program.
They wrote about the need for such a program in the context of the capability overhang which
we covered earlier this week.
They wrote, education systems are a critical route through which this gap is closed.
Studies project that by 2030, nearly 40% of the core skills workers rely on today will
change, driven largely by AI.
By embedding AI tools, training and research into the core infrastructure of schools and
universities, education systems can evolve alongside these shifts and better prepare students
to thrive in a world with AI.
The program will see OpenAI work with foreign governments and universities to bring AI into
education systems, and in addition, OpenAI will provide tailored training through their
OpenAI Academy and certification system.
The first cohort of partner countries includes Estonia, Greece, Italy, Jordan, Kazakhstan,
Slovakia, Trinidad and Tobago, and the UAE.
Google is also doubling down on education as a major focus for their AI organization.
In collaboration with the Princeton Review, Gemini can now serve free, full-length practice
exams on demand.
The feature will begin with practice SATs, with Gemini providing instant feedback for students.
In addition, Google has awarded half a million dollars in funding to Cal State Fullerton
to support AI literacy training for educators.
Associate Professor Bridget Drewkin said, when teachers understand how AI systems work,
including how to build, evaluate, and use them thoughtfully and responsibly, they can
guide students in asking good questions about technology rather than just consuming it.
Now as you can tell, all of these headlines are sort of all part of a larger story, which
is society's adaptation to AI.
And that of course has been a major topic of conversation at Davos this week as well.
The latest to comment on that is Microsoft CEO Satya Nadella, who warned at Davos that
AI risks losing public support if the technology doesn't deliver clear benefits to everyday
people.
In an interview from the WEF, he said, we as a global community have to get to a point
where we're using this to do something useful that changes the outcomes of people in
communities and countries and industries.
Otherwise I don't think this makes much sense.
In fact, I would say we will quickly lose even the social permission to actually take
something like energy, which is a scarce resource, and use it to generate these tokens.
Discussing AI job disruption, Nadella rejected the idea that this is something external
happening to society beyond intervention, commenting, thinking of this as somehow
living outside of the realm of human agency is probably not the right way to think about
it.
He's also not super convinced that AI leads to a world with no human work required.
In fact, he believes the opposite, comparing this moment to the adoption of the personal
computer.
Nadella said, in the early 80s, if someone had come to us and said that 4 billion people
are going to wake up every morning and start typing, you would have said, why?
We have a type of tool and that's good enough.
We don't need 4 billion people.
Nadella is also a firm believer in Jevons paradox when it comes to AI.
This axiom implies that cheaper AI will drive up demand rather than lead to a market crash.
He said, if you buy my entire argument that we've got a new commodity, it's tokens,
and the job of every economy and every firm in the economy is to translate these tokens
into economic growth, then if you have a cheaper commodity, it's better.
Overall, Nadella's view is that AI is a technology that will become deeply ingrained
in society and the economy, rather than an external actor disrupting human flourishing.
He said, all of these token factories are part of the real economy, connected to the
grid, connected to the telco network.
That's what's going to drive it at scale, whether in the global South or in the developed world.
Two others who commented on the speed of AI disruption at Davos
were JPMorgan CEO Jamie Dimon and Nvidia CEO Jensen Huang.
In a very frank and stark discussion at Davos,
Dimon said that companies and governments cannot ignore AI or put their head in the sand.
He said, it is what it is.
We're going to deploy it.
Will it eliminate jobs?
Yes.
Will it change jobs?
Yes.
Will it add some jobs?
Probably.
It is what it is and you can hope for the world you want, but you're going to get the world you've got.
He added, your competitors are going to use it and countries are going to use it.
However, it may go too fast for society.
And if it goes too fast for society,
that's where governments and businesses need to in a collaborative way,
step in together and come up with a way to retrain people and move it over time.
Dimon gave the example of the two million truck drivers in America
who are likely to be pushed out by autonomous driving in the medium to long term.
He said, should you do it all at once?
Have two million people go from driving a truck and making $150,000 a year
to a next job that might be $25,000?
No, you will have civil unrest, so phase it in.
Now, playing his role as Optimist in Chief,
Nvidia CEO Jensen Huang argued that labor shortages rather than mass layoffs
were going to be the issue that society needs to face.
He said, energy is creating jobs.
The chip industry is creating jobs.
The infrastructure layer is creating jobs, jobs, jobs, jobs.
This is the largest infrastructure build out in human history.
That's going to create a lot of jobs.
He also reiterated a point that he's made before
that this particular tech revolution was actually creating a ton of opportunity
in the physical trades.
He said, it's wonderful that the jobs are related to trade craft
and we're going to have plumbers and electricians and construction and steel workers.
In the United States, we're seeing a significant boom in this area.
Everyone should be able to make a great living.
You don't need a degree in computer science to do so.
And while some people were quick to argue that the jobs that Jensen is talking about
are very temporary and will only last until data centers are built,
others from very different walks of life embrace the message.
Mike Rowe, whom you might recognize from the show Dirty Jobs,
talked about this discussion between Jensen and Larry Fink
and said, I couldn't make it to Davos this year,
but I'm delighted to see that my message has.
Obviously, our workforce is nowhere near ready for what's coming.
In fact, we're not ready for what's already here.
We're going to need to dramatically rethink the way we train the men and women
who will build the infrastructure in question and the speed with which we do so.
I'm heartened and encouraged to see Silicon Valley at the table.
Now, of course, this debate is going to continue.
And in fact, AI disruption is also the topic of our main episode.
So that is going to do it for the headlines.
And to that other topic, we will now move.
Sure, there's hype about AI,
but KPMG is turning AI potential into business value.
They've embedded AI in agents across their entire enterprise
to boost efficiency, improve quality,
and create better experiences for clients and employees.
KPMG has done it themselves.
Now, they can help you do the same.
Discover how their journey can accelerate yours at www.kpmg.us/agents.
That's www.kpmg.us/agents.
If you're using AI to code, ask yourself,
are you building software or are you just playing prompt roulette?
We know that unstructured prompting works at first,
but eventually it leads to AI slop and technical debt.
Enter ZenFlow.
ZenFlow takes you from vibe coding to AI first engineering.
It's the first AI orchestration layer that brings discipline to the chaos.
It transforms freeform prompting into spec-driven workflows
and multi-agent verification,
where agents actually cross-check each other to prevent drift.
You can even command a fleet of parallel agents
to implement features and fix bugs simultaneously.
We've seen teams accelerate delivery 2x to 10x.
Stop gambling with prompts.
Start orchestrating your AI.
Turn raw speed into reliable production-grade output at zencoder.ai/zenflow.
Most companies don't struggle with ideas.
They struggle with turning them into real AI systems that deliver value.
Robots and pencils is a company built to close that gap.
They design and deliver intelligent cloud-native systems
powered by generative and agentic AI
with focus, speed, and clear outcomes.
Robots and pencils works in small, high-impact pods.
Engineers, strategists, designers, and applied AI specialists
working together to move from idea to production
without unnecessary friction.
Powered by RoboWorks, their agentic acceleration platform,
teams deliver meaningful results, including initial launches
in as little as 45 days depending on scope.
If your organization is ready to move faster,
reduce complexity, and turn AI ambition into real results,
Robots and pencils is built for that moment.
Start the conversation at robotsandpencils.com/AIDailyBrief.
That's robotsandpencils.com/AIDailyBrief.
Robots and pencils, impact at velocity.
Today's episode is brought to you by my company, Superintelligent.
In 2026, one of the key themes in enterprise AI,
if not the key theme, is going to be
how good is the infrastructure into which you are
putting AI and agents.
Superintelligent's agent readiness audits
are specifically designed to help you figure out, one,
where and how AI and agents can maximize business impact
for you, and two, what you need to do to set up your organization
to be best able to leverage those new gains.
If you want to truly take advantage
of how AI and agents can not only enhance productivity,
but actually fundamentally change outcomes in measurable ways
in your business this year, go to besuper.ai.
Welcome back to the AI Daily Brief.
Very clearly, one of the big topics heading into 2026
is AI-related job disruption.
It has been a major topic at the World Economic Forum at Davos.
It's something that is clearly on the minds
of people across the US, and especially in an election year
seems like it could start to become a political issue as well.
Now, one of the things that we've seen with some frequency
is studies that try to measure the amount of exposure
that different jobs or professions have to AI disruption.
In other words, which jobs are most likely
to be disrupted versus which jobs are least likely
to change and be disrupted?
A new paper from the National Bureau of Economic Research
argues that this is actually missing one of the key questions.
We need to know, the authors suggest,
not only which roles are most susceptible to disruption,
but how adaptable different categories of workers are
if that job displacement should come for them.
To figure this out, the study creates
a novel measure that they call adaptive capacity.
Adaptive capacity includes a few different factors.
The first is liquid financial resources.
This is perhaps obvious, but the authors
write, workers with greater savings
weather economic storms more effectively.
They point to a 2008 study showing
that individuals with greater liquid savings
are less financially distressed after job loss
and can take longer to find better-matching jobs.
Low-wealth individuals, on the other hand,
are sometimes forced to take whatever they can get,
leading to lower quality employment.
The next factor of adaptive capacity is age.
They point to a 2017 study that showed
that workers aged 55 to 64 who experienced job loss
during the Great Recession were significantly less likely
than those who were aged 35 to 44
to find new employment afterwards.
And the difference was about 16 percentage points.
Everything from retraining to relocation
to switching occupations is more difficult
for that older cohort.
Overall, job loss for older workers
leads to greater earnings losses
and lower employment rates.
A third factor that the authors include
is geographic density.
And once again, this might seem a little bit obvious,
but they point to a 2012 study that found
that workers in more densely populated areas,
basically, think big cities, had an easier time making
work transitions compared to those
in comparatively low-density areas.
And intuitively, again, this makes sense.
More densely populated areas are going to have more jobs,
which means more opportunities for those who lose their jobs.
The last factor that they consider
is skill transferability.
When a person has skills that can be applied
across many different jobs,
that creates more occupational mobility
than if you have a highly specialized skill set.
Once again, the authors point to a 2016 study
this time showing that individuals
with higher skills transferability
had smaller earnings losses after displacement.
The authors acknowledged that there are some other things
that could impact adaptability such as income
and union representation,
but they argued that based on the literature
in previous studies, those were less conclusively linked
to better outcomes,
and so decided to focus on these four areas.
The authors then combined a set of six primary data sets
to create a composite measure
of adaptive capacity by occupation.
They looked across something like 350 jobs representing
about 96% of American employment.
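The mechanics of a composite measure like this can be sketched in a few lines. To be clear, this is a hypothetical illustration, not the authors' actual method: the normalization choice (z-scores), the equal weighting, and every occupation and number below are assumptions for demonstration only.

```python
# Hedged sketch of building a composite adaptive-capacity index per occupation
# from four factors (savings, age profile, local density, skill transferability).
# All values, the z-score normalization, and equal weights are illustrative
# assumptions, not taken from the NBER study.
from statistics import mean, pstdev

# Illustrative per-occupation factor values (higher = more adaptable).
factors = {
    "software developer": dict(savings=200_000, youth=0.7, density=0.9, transfer=0.8),
    "office clerk":       dict(savings=8_000,   youth=0.4, density=0.5, transfer=0.3),
    "medical secretary":  dict(savings=3_000,   youth=0.3, density=0.4, transfer=0.3),
}

def zscores(values):
    """Standardize a list of values to mean 0, unit population std dev."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

names = list(factors)
keys = ["savings", "youth", "density", "transfer"]
# Standardize each factor across occupations, then average the four equally.
standardized = {k: zscores([factors[n][k] for n in names]) for k in keys}
index = {n: mean(standardized[k][i] for k in keys) for i, n in enumerate(names)}
```

The point of standardizing first is that factors on wildly different scales, dollars of savings versus a 0-to-1 transferability score, contribute comparably to the composite.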
From there, they were able to group people
into four different categories.
Basically, they looked at the adaptive capacity index,
this new measure that they were creating,
and an AI exposure index,
measuring how susceptible to disruption
a given profession was.
One quadrant, then, in many ways the most desirable,
at least in the terms of this study,
contained jobs that had both high adaptive capacity
as well as low vulnerability.
On the other end of the spectrum,
were jobs with low adaptive capacity and high vulnerability.
In the middle, of course, were roles
that have high vulnerability,
but also high adaptive capacity,
or low adaptive capacity, but also low vulnerability.
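That two-axis grouping can be sketched as a simple classifier. Again, this is an assumed illustration: the occupations, their scores, and the use of medians as the high/low cutoff are my stand-ins, not details from the paper.

```python
# Hypothetical sketch of the paper's quadrant grouping: each occupation has an
# adaptive-capacity score and an AI-exposure score, and is binned by whether it
# sits above or below the median on each axis. Scores and cutoffs are invented.
from statistics import median

occupations = {
    # name: (adaptive_capacity, ai_exposure), both on an illustrative 0-1 scale
    "software developer": (0.85, 0.80),
    "financial manager":  (0.80, 0.70),
    "office clerk":       (0.30, 0.75),
    "medical secretary":  (0.25, 0.70),
    "electrician":        (0.55, 0.20),
    "truck driver":       (0.35, 0.30),
}

cap_median = median(c for c, _ in occupations.values())
exp_median = median(e for _, e in occupations.values())

def quadrant(capacity: float, exposure: float) -> str:
    """Place one occupation into one of the four capacity/exposure quadrants."""
    high_cap = capacity >= cap_median
    high_exp = exposure >= exp_median
    if high_cap and not high_exp:
        return "high capacity / low exposure"   # best positioned
    if high_cap and high_exp:
        return "high capacity / high exposure"  # exposed but resilient
    if not high_cap and not high_exp:
        return "low capacity / low exposure"
    return "low capacity / high exposure"       # the at-risk group

for name, (cap, exp) in occupations.items():
    print(f"{name}: {quadrant(cap, exp)}")
```

In the study's framing, the last quadrant, high exposure paired with low adaptive capacity, is where the 6.1 million most vulnerable workers land.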
Summing up their findings, the authors write,
on average, highly AI exposed workers appear well equipped
to handle job transitions
relative to the rest of the workforce,
yet 6.1 million workers still face both high exposure
and low adaptive capacity.
Basically, they found that the quadrant of workers
who had high vulnerability to AI disruption,
but also a high adaptive capacity,
represented around 26.5 million workers.
This included occupations such as software developers,
financial managers, lawyers,
and in their words, other professions that benefit
from strong pay, financial buffers,
diverse skills, and deep professional networks.
Given that, the authors write,
these well-positioned workers, whom observers often cite
as being highly threatened by AI automation,
likely possess relatively strong means
to adjust to AI-driven dislocation if it were to occur.
The group instead that they are most concerned with
is these 6.1 million workers who face both high exposure
to AI disruption and low adaptive capacity
to manage a new job transition.
The authors write, many of these workers occupy
administrative and clerical jobs
where savings are modest,
worker skill transferability is limited,
and re-employment prospects are narrower.
Meaning, of course, that if those folks experience
AI-related job loss,
they're likely to be at risk of lower re-employment rates,
longer job searches,
and more significant relative earnings losses
as compared to others.
Now, one of the critical findings is that among this group
with high exposure and low adaptive capacity,
86% of these most vulnerable workers are women.
While occupations like software developers,
financial analysts, web developers, and marketing managers
also have high exposure,
they have diverse skills portfolios,
they tend to work in dense metro areas,
and they have liquid net worth
that can be in the hundreds of thousands.
There are also geographic patterns in the vulnerability.
The authors find that vulnerability is concentrated
in places like college towns and state capitals
that have lots of administrative positions
that are supporting institutions.
Places like Laramie, Wyoming,
Stillwater, Oklahoma, Springfield, Illinois,
and Carson City, Nevada
have something like five to seven percent
of their local workforce
in this high-vulnerability category.
Now, I think this is super interesting analysis,
but one of the things that stands out to me
is that when it comes to measuring adaptability,
a lot of the measures that they're using,
presuppose a world where there are a similar number
of other jobs to adapt to.
In other words, my concern is that it might be underestimating
the structural changes to work.
Now, the authors do note this,
they acknowledge in their limitation section,
somewhat in passing,
that if AI quote fundamentally reshapes the economy,
these historical relationships may not hold.
But I think that that caveat
understates the problem.
Every component of the adaptive capacity index
is calibrated in a world where displacement
is localized and destination jobs exist.
It basically models AI disruption
as a larger version of a plant closure or a trade shock,
discrete localized events where affected workers
transition into an otherwise stable economy.
Pretty much all the historical evidence they cite
describes exactly that type of scenario.
AI of course could work differently.
If it affects cognitive task categories
rather than specific firms or industries,
you could see things like simultaneous pressure
across related occupations.
The secretary, customer service rep,
insurance claims processor,
and office clerk all face exposure at once,
meaning they can't absorb each other's displaced workers.
We could also see some pretty significant shifts
in skill complementarity.
The framework assumes skills are either transferable or not,
but AI might make some skills radically more valuable
while devaluing others entirely.
Transferability becomes a moving target.
The question is basically,
what if there's no adaptive capacity
that prepares you for a world where the category
of work you do is being structurally reduced
rather than shifted?
The framework that the authors provide
can tell you that a 58-year-old medical secretary in
Springfield, Illinois, with $3,000 in savings
is going to struggle more than a 32-year-old software
developer in Seattle with $200,000 in liquid assets.
What it can't tell you is what happens
if there's simply less demand for human cognitive labor
in aggregate.
And so does that mean this isn't useful?
I would argue that no, it is still in fact useful
for a very specific reason.
Effectively what these authors are helping provide
is triage policy during a transition.
Even if there is total structural disruption,
human and institutional inertia
is likely to draw it out over some time.
And net net, what this research can help show
is that the most vulnerable groups identified here
might need the fastest and most direct policy response
even as we figure out the full extent of the disruption.
There is a strong argument to be made in other words
that whatever the end point looks like,
whether it's structural transformation,
a more modest acceleration of existing automation trends,
and whatever happens on job creation on the other side,
the sequencing of the disruption
will likely follow something like the vulnerability gradient
that this research identifies.
If the conventional framing of this would have been,
we'll help these workers transition to growing occupations,
the more honest version might be,
we don't know what the labor market looks like in 10 years,
but we do know that these workers
will face income disruption first,
have the least capacity to self-insure
and are concentrated enough geographically
to reach efficiently.
We should get resources to them fast
while we figure out the rest.
A lot of what you are hearing me advocate for on this show
is less fanciful imagined discussion
and much more direct and discrete policy discussion.
And I think that the findings in this study
can contribute to exactly
that sort of discrete policy analysis.
I will link to the study in the show notes
so you can go check it out for yourself,
but for now that is gonna do it for the AI Daily Brief.
I appreciate you listening or watching as always,
and until next time, peace.
Peace out.

The AI Daily Brief: Artificial Intelligence News and Analysis
