
Stephanie Flanders sits down with Nobel Prize–winning economist Daron Acemoglu to unpack one of the most urgent questions facing the global economy: how is artificial intelligence changing the future of work, and what are the potentially dire consequences for society and democracy?
Bloomberg Audio Studios, podcasts, radio, news.
On our current path, this handful of people will decide what the future of AI is, and
the best way to counter balance that is to have a vision that's different and hopefully
better for society.
Welcome to Trumponomics, the podcast that looks at the economic world of Donald Trump.
How he's already shaped the global economy and what on earth is going to happen next.
While we talk about the big forces affecting our economy and the broader world on this show,
and there's no bigger topic in economic policy and general discussion these days than the
impact of AI.
We've talked at different times about the consequences for jobs, inflation, interest rates,
and whether policymakers, let alone ordinary people, are ready for any of it.
Well, my guest this week, Daron Acemoglu, famously takes the long view on these matters;
he's a recipient of the Nobel Prize in Economics in 2024.
He probably wrote the single most widely read book of economic history of recent times,
Why Nations Fail.
And more recently he wrote, with Simon Johnson, Power and Progress: Our Thousand-Year
Struggle over Technology and Prosperity.
And I talked to him about that book and the lessons for this AI revolution a while
back, in the summer of 2023, but given everything that's been going on, I wanted to have him
back to see whether he saw anything in the new waves of AI that we've had since 2023,
particularly in the last few months, whether he'd seen anything to make him change his
view on either how fast this technology is going to change our economy or how well placed
we are to get the best out of it.
Daron, thanks very much for coming back onto this podcast.
Thank you, Stephanie, it's my pleasure to be here.
We will get into some of the sort of key dimensions of this in a minute, but I should just get a
sense from you.
I mean, we're talking now mid-March 2026, there's been so much chatter and I suspect many
people listening have had their own real life experience now of the development of all
the different forms of AI, and particularly the sort of agentic AI that we talk about.
Are you in a kind of very broad sense reassessing how fast or how fundamentally this is going
to change our world?
Yeah, every day, I think the underlying technology is changing faster than what I would
have predicted, what many would have predicted a year and a half ago.
So especially with the recent developments in agentic AI, especially led by Anthropic,
there is a real possibility that these tools can be broadly useful in what people do.
There is still a lot of uncertainty, however. First, we are not seeing any of the pre-packaged,
easy-to-use, reliable applications (think of them as the Microsoft Words or Microsoft Offices
of AI) that can be used across a broad range of occupations or in some specific occupations.
Those are not around yet.
There is still uncertainty about whether there will be bottlenecks in reaching higher
reliability and higher judgment.
There is every evidence that there is a lot of rapid progress, but there are some weaknesses
in these models that are still persistent and I don't just mean hallucinations, but lack
of a deep understanding.
They don't seem to have a conceptual framework, they don't understand the context and they
cannot reason at multiple levels of abstraction about a problem yet.
So those may be overcome and I think many of those are going to be important in dealing
with edge cases in many occupations.
So wholesale automation of occupations is still not something we are going to see right
away, but some people swear that we are going to see it in one year or two years, three
years.
So there is a lot of uncertainty, but let me make one thing clear.
If we do not up our game about both how we regulate these models and how we actually
develop them, there could be a huge amount of damage to society.
I think that is very helpful because there are two elements of this where there is obviously
as you say, there is a lot of uncertainty and there is a wide range of opinion.
If possible I am going to try and separate them, but obviously they merge into each other.
One is this question of the pace of change.
How fast are companies really going to be able to change their practices and capture those
productivity improvements?
And then the second is, which you have just highlighted, is how well are we positioned
to make the best of this, not just to get all the productivity growth, but to make sure
it is actually positive for most of the population, not just a few.
You highlighted in the 2023 book that none of that was automatic in the case of the Industrial
Revolution, and we may have to do it much faster this time.
Just focusing on this speed question: there is the Citrini report, and there has been a
little mini-industry in debunking this research report, which went viral because it captured
some elements of the faster pace that we were seeing.
I think you wrote a paper, "The Simple Macroeconomics of AI", and in the debate I would say that you are
fairly low-key about the pace of change, the extent of change in any given year.
I think you said at most a few percent over 10 years, so maybe even a fraction of a
percent of productivity growth, overall productivity growth a year, would you stand by that basic
assessment today, or do you think maybe the gains, just the pure productivity gains
could be a bit faster?
I think they could be a bit faster, there has been faster change of the capabilities
of the foundation models, however, it would still require some big breakthroughs, especially
at the application layer.
The bottom line of that paper was to point out how we can get a fairly simple understanding
of the constituent parts of the contribution of AI to productivity and GDP growth.
That comes from realizing that the GDP contribution of AI is nothing other than what fraction
of tasks are going to be taken over or completely transformed by AI in the economy, times
the average productivity gain or average cost savings in these tasks.
That's the calculation that I did with the available evidence in 2022, but even then
a lot of people took issue at how I interpreted the data, etc.
So you could boost some of the numbers that I have, which were that about 5 percent of
the whole economy will be taken by AI within 10 years or by 2030 or thereabouts, and that
would lead to about 20 to 25 percent cost savings relative to the labor costs that firms
used to spend on the same tasks.
Now, you can boost my numbers by increasing either or both of these quantities, so you
can say, no, no, it's not 5 percent.
It's going to be 20 percent of the economy that AI is going to take over, in which case
you would quadruple my numbers, or you could say it's not 20 percent cost savings, but
it's going to lead to 30 percent or 40 percent cost savings.
After all, you know, it can't lead to 300 percent cost savings, because labor wasn't very expensive
anyway.
But you see there's only so much elbow room to do that kind of thing; you're not going to come up with
revolutionary numbers here.
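The arithmetic Acemoglu describes above can be sketched as a back-of-the-envelope calculation. This is purely illustrative, using the rough figures quoted in the conversation (about 5 percent of tasks affected by 2030, about 20 to 25 percent average cost savings); the function name and the 22.5 percent midpoint are my own assumptions, not from his paper.

```python
def ai_gdp_contribution(task_share: float, avg_cost_saving: float) -> float:
    """GDP contribution of AI over the horizon, per the decomposition above:
    (fraction of tasks taken over or transformed by AI)
    x (average productivity gain / cost saving on those tasks)."""
    return task_share * avg_cost_saving

# Figures quoted in the conversation: ~5% of tasks, ~22.5% average cost saving
baseline = ai_gdp_contribution(0.05, 0.225)    # 0.01125, i.e. ~1.1% over ~10 years

# "Boosting" the task share to 20% quadruples the result, as he notes
aggressive = ai_gdp_contribution(0.20, 0.225)  # 0.045, i.e. ~4.5% over ~10 years
```

Even the aggressive scenario adds only a fraction of a percentage point to annual productivity growth, which is the point of the "no revolutionary numbers" remark.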
And part of the problem here is that we are right now, and that was the case two years
ago, and continues to be the case, we are right now focusing on AI as an automation tool,
as a tool to replace workers.
That's not the best way of using AI.
The best way of using AI is to try to complement workers so that they can do new things.
They can perform new tasks, they can increase their sophistication level, and also respond
to challenges in the world economy from globalization, from aging, from climate change, by creating
new goods and services, new organizations, and so on.
If we do that, I think I would be more optimistic about the future of AI, and it's one of the
aspects of the wisdom gap that we have right now.
We don't know how to regulate existing models, and we are not really focusing on what we
can do best with these models, and that's both for productivity and social consequences.
So I've just made the productivity case that we could actually get better productivity
consequences, but actually the social consequences are even darker.
If we displace people, if we say displace 20% of the population from their jobs, and they
remain unemployed, or they go to lower quality, lower pay jobs, our democracy is not going
to survive.
We're already struggling to make our democratic system work.
We're not doing that well already.
And if we put another huge shock on top of that, I'm not very optimistic.
So beware.
And just thinking about the economics of what you're saying, and also thinking about
what captured people's imaginations about that Citrini report (they say themselves this
is a thought experiment): there was something kind of gripping about the fact that it
was claiming to be a memo written in 2028. The claim was basically that you would have
an extraordinary amount of change in business models in a very short period of time.
I assume you would say given real world frictions and just the way things tend to happen, that
a two year time frame is very unrealistic.
But there was another basic assumption built into that that the change that we will most
immediately see and will have the biggest impact will be simply to replace labor, not
to augment it.
And that to the extent that it's creating other stuff, that's going to be far outweighed
by the job destruction.
Even if you don't accept the time frame, do you think there is an overemphasis on replacement
relative to augmentation, or is agentic AI really both things?
I don't know, it's an open question, but my bet would be on the Citrini side, not on
the time frame, but on the path that we are following.
There is so little that these companies are doing in order to understand what work humans
do and try to be useful to humans.
The whole agenda of all of the leading companies in the United States, now joined by DeepSeek
in China, is AGI, Artificial General Intelligence.
That is a banner for saying these models are going to do everything better than humans,
which of course then leads to the corollary that a lot of companies will just throw away
their humans and use these models.
That is an automation agenda.
That is exactly what Citrini banked on.
Now, they then made a number of other assumptions and steps about how that would work out.
What its consequences would be, how quickly those would be, those I don't agree with.
But credit to them, they said, this was a scenario, they weren't even making a prediction.
I don't know why the markets went haywire, given that there was no new information in
there.
Everything that was in the Citrini report has been said before, and they themselves said
there is no original research here.
But we are living in such fragile times in everything.
The valuation of these companies is all based on very fragile assumptions about what they're
going to be able to achieve in the future.
If you look at the amount that they're spending and their valuations, this can only be justified
if they make something like a trillion dollars in revenues in the foreseeable future as an AI
industry. That's just incredible.
How are you going to get to a trillion dollars?
They're barely managing to make a couple of billion dollars right now as a whole industry.
So there is something of a glass house here.
Okay, I have to say that I'm slightly depressed by that answer, because I thought you were
going to push back more heavily against the Citrini assessment.
I absolutely know that you're concerned about our capacity to cope with this.
But I was interested in that.
Let me give you the pushback as well.
That's the point I want to make throughout.
There is the potential to use AI, not for automation.
That's what I keep emphasizing.
But I also want to push very hard against the assumption that either we are going there
already, no, we're not, or that we can get there automatically, no, we cannot because
all of these companies have this business model of just let's replace all the workers.
They haven't even put into their calculations much of a revenue stream that they can get
from complementing workers because that's just a very difficult thing to monetize.
So I think that's where our wisdom gap is.
We are not even wisely thinking about what we should be doing with these very capable
models and the industry is going in its own direction.
You have just done a paper with two of your colleagues for Brookings that is trying
to give some concrete advice to policymakers in this area, to answer specifically that question.
I do want to get to that.
I just want to touch quickly on one thing, to make sure it comes out of this conversation.
One of the things we see a lot, particularly if you look at the research studies in this
area, is that they tend to look at the range of occupations and talk about their degree
of exposure, quote unquote, to AI.
And there's a whole range; I think in your original assessment, which a lot of people use,
it was about 20 percent of occupations.
And obviously some people have high numbers.
If one's thinking about different kinds of AI and the potential of different AI, how
should we read those?
Are they just going along with the way that business is looking at it now, which, as you
pointed out, is very much as a labor-replacement technology?
Should we see that as exposure to replacement, or should we see something more sophisticated?
Quite a set of questions.
There are really three questions you're asking here, Stephanie.
Let me answer each one of them in turn.
One is, what does this AI exposure mean?
And in general, it is an ill-defined concept because you could be exposed to AI because
you're going to lose your job with AI or you could be exposed to AI because you could
use AI to increase your contribution to your job.
And we see that in companies' exposure as well; investors can't decide the difference
between those two either.
That's why they fluctuate between devaluing software companies and giving them a huge boost.
So that's the first problem with AI exposure.
So when I wrote my paper, I took a position similar to the Citrini report, and I said:
right now we're going towards automation, so let me focus on that.
Second, where does that 20% number come from?
So roughly speaking, think of it this way.
Right now, and I think in the near future, AI is pretty useless in jobs that involve
a huge amount of interaction with the physical world: construction, custodial work, manufacturing
work, work that involves home care, hairdressing. The reason is that we are very far behind
in robotics, but also that AI models themselves don't have a good conceptual understanding
of spatial causal relations. Even if we had fantastically flexible robots that could
cut your hair or hold your hand, AI models would continuously make mistakes about spatial
causal relations, and those unreliabilities would end up breaking your neck.
So let's eliminate those jobs.
I've also eliminated, again, based on other people's coding, any jobs that include a high
degree of judgment.
So we wouldn't want AI to run air traffic control.
So Stephanie, think about it yourself.
If Manchester Airport said, from now on we're not going to have any air traffic controllers,
everything is going to be done by AI; it might hallucinate, it might make some mistakes,
but that's fine, it's cost savings.
Would you fly to Manchester Airport?
So we don't want that.
So those jobs are out and any job that involves a high degree of social interaction is out
as well.
So that leaves essentially a range of office cognitive jobs.
So that's where the 20% comes from.
Now what about companies?
The companies are, indeed, going after those jobs, they're going after IT jobs, they're
going after back office jobs, but there are several new papers that have come out over
the last few months and they all find the same thing.
The companies are talking a big game about AI; they say, oh, we have a lot of AI being
used, but so far it has had zero impact on the companies, zero impact on employment, zero
impact on productivity.
Because it's actually just like other technologies: it spreads slowly.
That was the basis of my numbers.
And it's very difficult to integrate AI into what those companies do without a big organizational
change.
When actually push comes to shove, when they try the organizational change, what they're
going to realize is that you cannot really replace IT security people with AI.
You need to use IT security people together with AI and that might actually give us a boost
towards more human-complementary, more pro-worker AI, but we're not there yet, because they're
not trying to do that at big scale yet.
Now of course, Claude Code, that's a big advance.
Will that change things in 2026?
I don't know.
In 2027, I'm sure there will be more companies that have attempted to do things and will
perhaps have a rude awakening that this is not going to work in the way that we're trying to
do it.
Perhaps we'll find a new direction, but this is where both policy and public debate are
really important.
To your point about all these companies' claims, I think Goldman Sachs added it up from
all the earnings calls that companies are doing and chief executives are giving: I think
the average productivity growth that they're claiming is 32 percent.
But it's not necessarily evident in any of the numbers.
Let's get on to what policy makers could do about it, because that's something that
governments everywhere are obviously very focused on.
And I noticed that you had recently done this report for Brookings, I think, about a
framework for thinking about pro-worker AI.
What are the main sort of policy areas that you would like people to focus on for that?
First of all, just two points I want to make before I talk about policy.
The first one is just to clarify that by pro-worker AI, I mean exactly the same thing that I was
talking about a second ago: human-complementary AI, AI that helps workers do more, helps workers
become more expert at their jobs, perform new tasks, have better information for problem
solving, troubleshooting, judgment, and so on.
That's what we're talking about with pro-worker AI.
And not just for office workers.
We have a lot of examples in the paper showing how manual workers can benefit from AI.
It cannot replace manual workers, but electricians, plumbers, nurses can hugely benefit from
having the right kind of AI assistant, but it has to be the right kind of AI assistant.
It's not going to be ChatGPT.
That's the first point.
The second point is that my belief is that as important as policy, actually is what
we're doing right now, Stephanie.
It's the public debate.
Right now we have delegated the future of this very, very important technology.
Some would argue therefore the future of humanity to a handful of people who have no feedback
from society, who have no accountability to society, and right now society is confused.
So on our current path, this handful of people will decide what the future of AI is.
And the best way to counter balance that is to have a vision that's different and hopefully
better for society, and that's what I hope the pro-worker AI vision is.
So the more people talk about that, the more the public pressure will grow, and the more
of an alternative there will be.
Look, my understanding from my limited experience is that Anthropic, Google, OpenAI are filled
with people who are very well-meaning.
If they thought that there is a socially beneficial and still technically exciting area of AI,
they would be much more likely to take the plunge in that direction.
It's just that we're not offering them an alternative and society is not pushing back
against Sam Altman and his ilk's vision.
So that's the point.
Policy, in my view, is a supporting set of instruments.
It can remove distortions that solidify the existing system, and it can give
a nudge to people, as policy has done in the past, to try new things.
So on the first bucket, there are many problems in our current system that would make
a redirection of AI in a pro-worker direction more difficult.
I would single out two of them, but there are more.
The first one is that our tax code, and that's true in the UK and in the US, encourages
firms to replace workers, because we tax capital essentially at 0 percent and labor at
25 to 30 percent, especially in the US once you add the healthcare costs and all the payroll
taxes and everything.
So that's a massive subsidy to capital that would make firms adopt automation, even if
automation wasn't better than humans because they're getting the subsidy.
Second, we know from historical evidence and current evidence that new things are done
by new firms.
Competition is really important.
The tech industry has become one of the least competitive industries in history.
And moreover, business models that are new and different are likely to get crushed.
So encouraging more competition via antitrust, by enabling new companies to enter and try
new things, I think that's a very important part of it.
Now there is a lot of energy in Silicon Valley, but it's all these startups that try to do
exactly what OpenAI and Anthropic and Google do, so that they can be bought by them.
So that's not the kind of competition I'm talking about.
And then in terms of nudging us to do new things, the government is horrible, in my opinion,
at being an entrepreneur.
It cannot be a venture capitalist, it cannot be an entrepreneur, it cannot be an innovator,
but it has great potential to be an aspiring leader.
We have had so many examples where a small amount of money from the government has kick-started
industries, in nanotechnology, in the internet.
In robotics, there was a robotics challenge, a million-dollar challenge, that really focused
people's attention on getting robots that could actually play a game.
So we could do the same with pro-worker technologies.
So we have given several examples of technologies that are very feasible, but are not getting
much investment, a few of them are getting some investment from smaller companies.
We can come up with another 10 or 15 examples, and the government could run a competition
in these kinds of technologies to focus the mind and show the demonstration effects that
would then say to people, wow, we could do this in other industries and other occupations
as well.
Just thinking about what you've said, the paper you wrote for Brookings is
trying to encourage us to think about AI policy in a different way.
So, what was it called, the AI Action Plan or something, that the Trump administration
brought out last summer, very early on.
And the way that we describe it generally, but particularly when we're talking about China
in the US, AI policy is all about how to get there as fast as possible, how to make
sure, especially in the US, how to make sure we win the race.
And there's quite a lot of focus on sort of privacy and concerns around that and maybe
concerns about the pace of adoption.
And that's obviously the gap that you're trying to fill, but it doesn't feel like there's
much about how to make this work for people.
And I'm sort of struck, because we had the Chancellor, Rachel Reeves, the UK Finance Minister,
on the show a while ago.
And I think that's one of the things that she's thinking about is we're not going to
lead the AI race in the UK, but we have said a lot about leading on the adoption.
And I guess the piece that you would add is that you've got to adopt it in a pro-worker
way.
I mean, what does that, what would that look like for the UK?
Let me first say that the paper you're referring to is actually co-authored with
David Autor and Simon Johnson.
So let me give a shout out to them as well.
Secondly, I think you're absolutely right, while you could give some credit to the Trump
Administration for emphasizing AI, they are really rudderless.
Their only shtick here is: this has to be an American technology, and we have to race, and we
have to get rid of all regulations.
That's not a coherent AI policy, but I also fear it's even worse than that.
And it's worse in the following way.
This AGI winner-take-all framing is having a truly pernicious effect on U.S.-China relations,
because once you are in this mindset that you are locked into this existential race for
AI supremacy with China, it means that there is no room for collaboration with China on
anything because they are your mortal enemy because if they get to AI supremacy before
you, they're going to destroy you.
That's completely false.
AI models are not going to be at a level that they can just give you global supremacy
by themselves and there are many other things that we can do with AI.
In fact, now coming to the UK: China and Germany are doing more interesting things with AI
than the U.S. in some domains.
Sure, the U.S. has the unrivaled leadership in large language models and foundation models.
But I think the real gains from AI, as I hinted at the beginning of our conversation, will
come from using AI in applications and manufacturing.
Healthcare, I think, is huge, but manufacturing is going to be easier.
And who is leading the efforts to put AI into manufacturing? It's China, it's Germany, even
though they have no large language model industry, because they have the manufacturing know-how,
they have the data, and they are not beholden to this AGI race, so they're trying to do more
practical things.
I think that's the space in which the UK has to be now.
Unfortunately, the UK doesn't have much manufacturing left.
But I think for the remaining manufacturing and other applications, that's where
the UK can have a leadership role. Germany shouldn't be the only one with a leadership
role; China, of course, is going to have one, but the UK can have one too,
once there is a broader scoping of what it is that we can do with AI.
And if we actually manage that, it will have beneficial effects for global balances, because
once you get out of this trap of "winner-take-all, we cannot collaborate on anything with China":
we have so many global problems, global peace, all the societies that are aging and require
adjustment, climate change, pandemics. There's so much on which we actually need to collaborate
with China.
In fact, if China makes breakthroughs in applying AI to manufacturing, the U.S. should learn from
them.
So there should be information sharing in AI as well.
I'm going to run out of time, but I had a couple more questions. One follows on from
what we were saying about how a country could position itself if it's not trying to be
in this kind of existential race that the U.S. has positioned itself in.
The other conversation you hear a lot in the UK, and I mentioned it to the Chancellor
the other day, is about the professional services that are successful here. We still have
some advanced manufacturing, but our strongest categories tend to be, along with the creative
industries, professional services: legal services, accounting, finance, all those
things.
They seem to be particularly in the frame when it comes to AI, at least in the discussion.
And there has been, I know, a government concern that if we want them still to be leading
sectors, they have to be leading in adoption, and we have to make sure there are no regulatory
or data-privacy obstacles in the way of that.
And I guess that raises the question, in the race to adopt, you could actually be making
the institutional setup worse, not just failing to make it better.
Right.
You've given me an opening to talk about another one of my topics, which is data.
So yes, indeed, if your objective were to pour as much money into AI as possible and get
rid of all short-term obstacles to AI, you would get rid of privacy and you would allow
AI companies to capture as much data as they want, freely.
That would be, and that has been, the worst idea you can imagine.
It's actually worse for the industry.
The future of the industry depends on data.
Data is going to be more important for our future as an economy, as a society, than land.
Can you imagine if we said, any piece of land you want, you can take it?
That would be just chaos, but that's how we treat data.
And that is actually bad for the industry, because it creates a tragedy of the commons,
where everybody's exploiting data and nobody's investing in data. Especially if you want
to do useful things with AI, like the pro-worker AI I was mentioning, you need a lot
of high-quality use cases.
We can do pro-worker AI to help teachers, to help nurses, to help electricians.
How are you going to do that?
Well, you need to train these models on basic knowledge, but you also need to train them
on use cases by the most experienced workers in that field, working with edge cases, difficult
cases, and they're not going to produce that data unless you pay them.
So the current environment, where we say privacy doesn't matter, you should give
as much data to these companies as they want because they're data-hungry, is actually destroying
the future of the industry, because these models are going to run out of high-quality data.
They're going to be trained on low-quality data, and they're going to be more likely to
create AI slop rather than the kind of high-quality, reliable AI that we need across a
range of occupations.
I guess just coming back to where we started, and the perspective of your 2023 book.
One of the features of that book, yours and Simon's, was the comparison with the Industrial
Revolution, making the point that although we tend to say, oh, it was fine, we ended up
with the productivity and it made everyone better off.
You were just pointing to how long the transition lasted, how incredibly costly that was for
people and how much it required active effort to manage it and have a better outcome for
people.
One of the big differences it seems between that industrial revolution and what we may
see now in the next few years with AI is that the workers in the frame, fundamentally,
are white collar workers.
And in fact, Dario Amodei has talked about half of entry-level white-collar work going;
it's quite a soft number, because he talks about this and then you could change all
the definitions, but half of entry-level white-collar jobs will be gone in five years.
Does it fundamentally change the challenge for policy makers and even the sort of short-term
macroeconomic impact?
If the main workers affected are white-collar workers, they're possibly some of the
better-paid, biggest-consuming members of society.
Well, first of all, yes, indeed, there is a lot of uncertainty about what that impact
is going to be, but it's true that it's going to be on white collar workers, more than
manufacturing workers for the reasons that we talked about that these models cannot
do physical work or cannot be combined with physical work yet.
Now, white-collar workers are college-educated, and our leaders are college-educated, so their
plight might have a bigger impact on the political system than the plight of, say, high school
graduates or high school dropouts did in the United States or the UK in the 1980s,
for example. That's a possibility.
The second important issue is the Industrial Revolution, indeed, and it's very important,
because you hear this sort of rosy view of the Industrial Revolution from Silicon Valley
all the time, that everything worked out well. It took about a hundred years of pain and
suffering before things started getting better, but we don't have that kind of time.
Our democracy wouldn't survive, and AI is advancing far too rapidly, so our political
system needs to be much better and much faster at redirecting things and adjusting
to things.
I think those are very important points for us to remember.
But finally, I think it's also very important to recognize that the impact will not stop
with white collar workers, because if college graduates cannot get the jobs that they
want to get, they're going to go and compete for other jobs.
They're not going to stay at home, they'll create downward-wage pressure and job displacement
risk for other people, or they will all be put into sort of gig work, which then creates
all sorts of other problems for the economy and for the labor market, so it's a systemic
problem for the labor market as well.
OK, I'm not sure that that was the most uplifting place to end, but it's been a bracing
yet profoundly illuminating and clarifying conversation.
Daron Acemoglu, thank you so much.
Thank you, Stephanie, it was great to talk to you.
Thanks for listening to Trumponomics from Bloomberg.
It was hosted by me, Stephanie Flanders.
Trumponomics was produced by Somersade and Moses Andam.
Sound design was by Blake Maples and Aaron Kasper.
To help others find the show, please rate and review it highly wherever you listen.