
In this Long Read Sunday episode, a deep exploration of one of the hardest questions around AI: whether human work still matters as machines grow more capable. The episode examines competing visions of an AI-driven future, from worlds where labor disappears to ones where new forms of work, value, and identity emerge. Moving from inequality and capital concentration to the changing nature of expertise, product development, and creativity, the discussion argues that while the future of work is impossible to predict, a shift is already underway toward humans shaping tools, environments, and opportunities in fundamentally new ways.
Sources:
https://philiptrammell.substack.com/p/capital-in-the-22nd-century
https://stratechery.com/2026/ai-and-the-human-condition/
https://newsletter.pragmaticengineer.com/p/when-ai-writes-almost-all-code-what
https://x.com/Saboo_Shubham_/status/2008742211194913117
https://x.com/reidhoffman/status/2008940669239247341
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today we are discussing one of the most unknowable but most thought-about questions in and around AI, which is of course how it will change our jobs and the work that we all do.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, Robots & Pencils, Landfall IP, Zencoder, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts.
If you are interested in sponsoring the show, send us a note at [email protected].
And very briefly before we dive in, I mentioned this a couple of times but I have some big
announcements coming up soon with the AIDB Intelligence product.
If you want to learn more about that, go to aidbintel.com and you can sign up for updates.
Now this is a weekend episode, which means of course a long-read slash big think episode.
And as I mentioned last week, we are still working our way through the spate of big think essays that closed out last year and began this year.
For today's show, we're actually going to string together excerpts from about five of them, with the first being from Dwarkesh Patel and Philip Trammell, called Capital in the 22nd Century.
Now this is an extremely long form and dense essay, and there has been a ton of debate
around it.
It's brought up questions of redistribution and wealth policy and tax policy, but that's
sort of not exactly the line that I'm going to thread.
In fact, we're going to focus on the parts that pick up and set the story for this post from Ben Thompson at Stratechery called AI and the Human Condition.
So let's read the first excerpt from Capital in the 22nd Century.
Dwarkesh and Philip write: In his 2013 Capital in the 21st Century, the socialist economist
Thomas Piketty argued that absent strong redistribution, economic inequality tends to increase
indefinitely through the generations, at least until shocks like large wars or prodigal
sons reset the clock.
This is because the rich tend to save more than the poor and because they can get higher
returns on their investments.
As many noted at the time, this is probably an incorrect account of the past.
Labor and capital complement each other.
Wealthy people can keep accumulating capital, but hammers grow less valuable when there aren't
enough hands to use all of them, and hands grow more valuable when hammers are plentiful.
Capital accumulation thus lowers interest rates, aka income per unit of capital, and raises
wages, income per unit of labor.
This effect has tended to be strong enough that, though inequality may have grown for other
reasons, inequality from capital accumulation alone has been self-correcting.
But in a world of advanced robotics and AI, this correction mechanism will break.
That is, though Piketty was wrong about the past, he will probably be right about the
future.
Indeed, in some ways, he may well be more right than he knew.
A lot of AI wealth is being generated in private markets, which only large and sophisticated
investors have access to.
You can't get direct exposure to xAI from your 401(k), but the Sultan of Oman can.
This trend towards the privatization of returns, already ongoing and especially pronounced in the AI startup world, could well continue indefinitely.
Furthermore, with full automation, the main source of catch-up growth for developing countries
goes away, namely that by importing capital and know-how they rapidly make their underutilized
labor more productive.
If AI is used to lock in a more stable world, or at least one in which ancestors can
more fully control the wealth they leave to their descendants, let alone one in which
they never die, the clock resetting shocks could disappear.
Assuming the rich do not become unprecedentedly philanthropic, a global and highly progressive
tax on capital or at least capital income, will then indeed be essentially the only way
to prevent inequality from growing extreme.
Without one, once AI renders capital a true substitute for labor, approximately everything will eventually belong to those who are wealthiest when the transition occurs, or their heirs.
And so here, NLW cutting in again, you can see where this would start to generate lots
of debate.
The argument that they pursue throughout is this idea that if AI proceeds apace as they think it might, this global and highly progressive tax will become necessary.
As I said, I'm not going to read all of this piece.
Nor am I going to rehash all of the arguments, although it is very good fodder for a very long kind of big think policy type of discussion, and much of it played out on Twitter slash X.
If you go check out Dwarkesh's account, you can find a lot of that there.
Where I'm interested in picking up the story is with Ben Thompson's Stratechery essay, which he published earlier this week, called AI and the Human Condition.
Where Ben starts his piece is with a reflection on his own job. Pity the paradox, he writes, of the content producer in the age of AI.
On the one hand, AI is one of the greatest gifts ever in terms of topics to cover.
On the other hand, LLMs in particular are quite literally content producers.
What's the point of writing analysis when ChatGPT or Gemini or Claude will deliver
analysis on demand about any topic you want?
Is this one of those situations like the early web where the possibility of reaching everyone
seemed like a boon, but was actually a ticking time bomb for the viability of the traditional
publishing model?
Now, where Ben concludes for himself is that he's actually pretty optimistic about his
own prospects.
But where he concludes and where we jump off is this, maybe I'm doomed, but if I'm doomed,
probably everyone else is too, particularly when you think about the very long run.
And that's where Ben picks up the capital in the 22nd century essay.
Now one thing Ben points out is that the conversation is admittedly very far out there
and makes a huge number of assumptions about how things are going to play out.
He says it's an argument of the dorm room discussion variety, and I don't necessarily think Dwarkesh or Philip would disagree all that much, but Ben gets crisp about the
assumptions underlying their argument.
He writes, part of the thinking is that once AI can create AI, it can rapidly accelerate
the development of robotics as well, until robots are making robots each generation more
capable than the last, until everything humans do today, both in the digital but also
the physical world can be done better by AI.
This is the world where capital drives all value and labor none, in stark contrast to the approximately 33% share of GDP that has traditionally gone to capital, with a 66% share of GDP going to labor.
After all, you don't pay robots for marginal labor.
You build them once; check that, they build themselves from materials they harvested, not just here on Earth but across the galaxy, and do everything at zero marginal cost at a rate at which no human can compete.
From there, however, Ben gets into skepticism.
I get the logic of the argument he writes, but I, perhaps once again, overoptimistically,
am skeptical about this being a problem, particularly one that needs to be addressed
right here right now before the AI takeoff occurs, especially given the acute need for
more capital investment at this moment in time.
The world Patel and Trammell envision sounds like it would be pretty incredible for everyone.
If AI can do everything, then it follows that everyone can have everything, from food
and clothing to every service you can imagine.
Remember, the AI is so good that there are zero jobs for humans, which implies that all
of the jobs can be done by robots for everyone.
Does it matter if you don't personally own the robots if every material desire is already
met?
Second, on the flip side, this world also sounds implausible.
It seems odd that AI would acquire such fantastic capabilities, and yet still be controlled
by humans and governed by property laws as commonly understood in 2025.
Third, it's worth noting that we have seen dramatic shifts in labor in human history, consider
both agricultural revolutions.
In the pre-Neolithic era, 0% of humans worked in agriculture.
Fast forward to 1810 and 81% of the U.S. population worked in agriculture.
Then came the second agricultural revolution, such that 200 years later, only 1% of the
U.S. population works in agriculture.
It's the decline that is particularly interesting to me.
Humans were replaced by machines, even as food became abundant and dramatically cheaper.
No one is measuring their purchases based on how much food cost in 1700, just as they
won't measure their future purchases on the cost of material goods in a pre-robotics
world.
That's because humans didn't sit on their hands.
Rather, entirely new kinds of work were created, which were valued dramatically higher.
Much of this was in factories, and then over the last century there was a rise of office
work.
All of that could very well be replaced by AI, but the point is that the history of humans
is the continual creation of new jobs to be done, jobs that couldn't have been conceived
of before they were obvious, and which paid dramatically more than whatever baseline existed
before technological change.
Like, if I might be cheeky, professional podcaster.
Podcasts didn't even exist 30 years ago, and yet here is Patel, and me, accumulating
capital simply by speaking into a mic and taking advantage of the Internet's zero marginal
cost of distribution.
A concept that itself was unthinkable 50 years ago.
Now, in the next section, Ben argues that much of human consumption experience is not
about choosing the objective best, but about choosing quirky, worse versions of things. Podcasts, for example, that say "um," "like," and "sort of."
He also argues that for his money, despite the availability of sex robots, he believes that
humans will still want to have sex with other humans. Concluding, he writes: I get the argument that this is the worst that AI will ever be, but it will also never be human,
which is what humans want most of all.
Thompson's last section is called The Problem with Inequality.
He writes, this gets at what I found the most frustrating about Patel and Trammell's point
of view.
The core assumption undergirding their argument was also about the human condition.
It just happened to be negative.
He then references a famous Louis CK appearance on Conan O'Brien, where Louis CK argues
that everything is amazing right now and nobody's happy.
Ben writes, if anything, you can make the case that technological innovations, by virtue of conferring their benefits on everybody, have actually had the perverse effect of making everyone feel worse off.
When I was a child growing up in small town Wisconsin, I had some sort of vague sense that
there were rich people in the world, but from my perspective, taking my first airplane
flight around the age of 10 was a source of wonder, and even provided a sense of status.
After all, many of my friends had never flown at all.
That was the comparison set that mattered to me.
Social media, or more accurately, user-generated content feeds, which are increasingly not social
at all, has completely changed this dynamic.
All I or anyone else needs to do is open Instagram to see beautiful people on private jets
or on beaches or at fancy restaurants, living a life that seems dramatically better than
one's dull existence in the suburbs or a cramped apartment.
Never mind that the means of achieving that insight is a level of technological wealth
that would have been incomprehensible to the richest person in the world 50 years ago.
To put it another way, what Louis CK identified in this clip was the extent to which human happiness
is a relative versus absolute phenomenon.
What we care about is not how much we have, but how we compare.
That by extension is what drives the technological paradox I noted above.
More capabilities more broadly distributed has tremendously enriched the world on an absolute
basis.
The end result, however, has been the dramatic expansion of our comparison set, making us feel more immiserated than ever.
This, writ large, is what Patel and Trammell seem to be worried about.
Sure, everyone may have all of their material needs met, but that won't be good enough if
the price of that abundance is the knowledge that someone else has more.
This might not be rational, but it certainly is human.
If you assume that the negative parts of humanity will persist in this world of abundance,
however, then you must leave room for the positive parts as well.
Even if AI does all of the jobs, humans will still want humans, creating an economy for
labor precisely because it is labor.
You can't make the case for the potential that jealousy ought to drive authoritarian
capital controls, while completely dismissing the possibility that the prospect of desirability
gives everyone jobs to do, even if we can't possibly imagine what those jobs might be.
Beyond podcasting, of course.
Today's episode is brought to you by Robots & Pencils, a company that is growing fast.
Their work as a high growth AWS and Databricks partner means that they're looking for elite
talent ready to create real impact at velocity.
Their teams are made up of AI native engineers, strategists, and designers who love solving
hard problems and pushing how AI shows up in real products.
They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver
meaningful outcomes in weeks, not months.
They don't build big teams, they build high-impact nimble ones.
The people there are wicked smart with patents, published research, and work that's helped
shape entire categories.
They work in velocity pods and studios that stay focused and move with intent.
If you're ready for career-defining work with peers who challenge you and have your back, Robots & Pencils is the place.
Explore open roles at robotsandpencils.com/careers.
That's robotsandpencils.com/careers.
If you're listening to this, you already know how fast AI is writing the rules for innovation,
disruption, and value creation.
And this new era demands a new kind of patent law firm.
Landfall IP was built from the ground up to operate differently, orchestrating how human
expertise and AI work together for better patents at founder speed.
Created by world-class patent attorneys who saw a better way, Landfall IP lets AI execute
the repeatable while attorneys elevate to create the exceptional.
Landfall isn't adapting to AI, they were built for it.
Have a new idea?
Try the discovery agent for free.
It's a confidential tool that helps innovators synthesize their inventions and instantly
see patentable insight.
Visit landfallip.com to learn more, that's landfallip.com.
If you're using AI to code, ask yourself.
Are you building software or are you just playing prompt roulette?
We know that unstructured prompting works at first, but eventually it leads to AI slop
and technical debt.
Enter Zenflow.
Zenflow takes you from vibe coding to AI first engineering.
It's the first AI orchestration layer that brings discipline to the chaos.
It transforms freeform prompting into spec-driven workflows and multi-agent verification, where
agents actually cross-check each other to prevent drift.
You can even command a fleet of parallel agents to implement features and fix bugs simultaneously.
We've seen teams accelerate delivery 2x to 10x.
Stop gambling with prompts.
Start orchestrating your AI.
Turn raw speed into reliable production-grade output at zencoder.ai/zenflow.
Today's episode is brought to you by Superintelligent.
Superintelligent is a platform that very simply put is all about helping your company figure
out how to use AI better.
We deploy voice agents to interview people across your company, combine that with proprietary
intelligence about what's working for other companies, and give you a set of recommendations
around use cases and change management initiatives that add up to an AI roadmap that can help
you get value out of AI for your company.
But now we want to empower the folks inside your team who are responsible for that transformation
with an even more direct platform.
Our forthcoming AI strategy compass tool is ready to start to be tested.
This is a power tool for anyone who is responsible for AI adoption or AI transformation inside
their companies.
It's going to allow you to do a lot of the things that we do at Superintelligent, but in a
much more automated, self-managed way, and with a totally different cost structure.
If you are interested in checking it out, go to aidailybrief.ai/compass, fill out
the form, and we will be in touch soon.
So the point here of Ben's piece in a lot of ways is just: we literally can't know what's coming next. And what's so challenging about the moment that we're living through, when it comes to our jobs and our work, is that it's easier in many ways to see how we have fewer things to do than to catch the glimpses of the new things that we might spend our time on in the future.
There is of course no place that this conversation is happening more saliently and more in public
than around software engineering.
We talked about this a couple different times this week in different contexts, and recently devs have been writing a lot about this.
Gergely Orosz wrote a piece in his Pragmatic Engineer newsletter called When AI Writes Almost All Code, What Happens to Software Engineering?
In it, he identifies the same magical feeling that so many other devs had while they were doing their side projects and personal projects over the holiday, using the newest models, Opus 4.5 and GPT-5.2, through interfaces like Claude Code.
And what he spends the rest of the essay exploring is what's good, what's bad, and what's just
changing.
The bad he sums up, for example, as the declining value of expertise: prototyping, he writes, being a language polyglot, or being a specialist in a stack are likely to be a lot less valuable.
The good, he writes, though, is that software engineers are more valuable than before.
Certain traits are in more demand: being more product-minded will be a baseline at startups, and being a solid software engineer and not just a coder will be more sought after than before.
The ugly, he writes, is uncomfortable outcomes.
More code generated will lead to more problems, weak software engineering practices start to hurt sooner, and perhaps a tougher work-life balance for devs.
And certain roles he estimates are going to change in pretty fundamental ways.
The one that he's interested in is product management versus software engineering.
Product managers can now generate software more easily, he writes, needing fewer engineers to realize their goals, but software engineers also need less product management.
Both professions are set to overlap with one another more than before.
Now what's interesting and valuable is not just that people are starting to have the conversation about how these roles will change; they're starting to try to lean into it and create new blueprints that people can actually put into practice.
Google senior AI product manager Shubham Saboo wrote a piece on X called The Modern AI PM in the Age of Agents.
It is an exploration of exactly this, how the role of product manager is changing.
And I think as you listen to parts of this, you'll find that there is probably a lot
here that's not just relevant for product managers.
He writes, the job of a PM used to be translation.
You talked to customers, synthesized their problems, wrote specs, and handed them to engineers.
You were the bridge between what people need and what gets built.
The value was in that translation layer.
That layer is compressing.
When agents can take a well-formed problem and produce working code, the PM's job shifts.
You're no longer translating for engineers.
You're forming intent clearly enough that agents can act on it directly.
The spec is becoming the product.
He continues,
I've watched this happen with myself and dozens of other PMs.
Previously, a PM would write a detailed spec, hand it off, wait for questions, clarify,
wait for implementation, review, give feedback, iterate.
The cycle took weeks.
Now they write a clear problem statement with constraints, point an agent at it, and
review working code in an hour.
The time between "I don't know what we should build" and "here it is" has collapsed.
The work of knowing what to build didn't get easier; it got more important.
You don't need to write the code yourself.
You need to know what you want clearly enough that an agent can build it.
The spec and the prototype are becoming the same thing.
You just describe what you want, watch it take shape, course correct, and iterate.
The bottleneck isn't implementation anymore, and the speed of shipping is only accelerating.
I've been at Google for around three to four months now, and it feels like we've shipped
years worth of AI progress.
Every big and small AI company is shipping at this pace thanks to AI coding agents.
The cycle times that used to define product development, from quarterly planning and monthly sprints to weekly releases, are compressing into something closer to continuous deployment of ideas.
When the implementation barrier drops this fast, the bottleneck shifts upstream.
The scarce resource isn't engineering capacity; it's knowing what's actually worth building.
That leads him to write about a new PM skill set.
It's things like problem shaping, context curation, and not just for technical problems,
but for taste.
The mental model shift he says is from hands-off to hands-on.
The AI PM, he writes, isn't just handing off requirements anymore; they're vibing the first iteration themselves and getting real feedback on working software, not slide decks or Figma mocks. Engineers then become collaborators on making the product better and production-ready, rather than translators of your intent.
This changes your relationship with the product: you're not describing what you want and hoping it comes back right; you're shaping it directly in real time.
Now I would argue that almost everyone is going to, at least a little bit more than they are now, be a product manager in the way that Shubham is talking about in this post.
That doesn't necessarily mean that all of us will contribute to the air quotes product
that is the output of our company, but all of us will, again, at least a little bit
more than we do now, have a more product management type of mindset about our problems and solutions.
We will start to look for ways in which we can build things, intermediate things, one
time things, discardable things, ephemeral things that can solve our problems or open up
new opportunities.
This mindset shift won't come overnight and it won't even necessarily feel like it's
work.
In the last short essay that we'll reference in this episode, LinkedIn founder Reid Hoffman wrote a reflection on a recent conversation he had with Replit CEO Amjad Masad.
He writes,
We're all becoming gamers.
We're quickly moving towards a world where with AI, we'll all be able to craft tools
to help us better play the game of life.
For those who grew up playing video games, you understand what I mean.
It should help you turn ideas into real things, instantly get unstuck on hard problems, and
operate beyond what one person could normally do alone.
Nowhere is this more true than on an AI development platform like Replit.
At scale, these platforms will make life start to feel like you're progressing through a game.
Each new challenge is a level and AI is how you craft a way forward.
For centuries, humans have built tools to get ahead, sometimes individually, sometimes
together.
But as economies matured, most of us stopped building tools and started relying on the ones
already available to work faster, live better, and scale what we were doing.
Software took this trend to its extreme.
Most people don't use software that's designed for them.
They use general-purpose tools built for the median user, tools that improve generic workflows,
but rarely map cleanly onto the specific problems any one person is actually trying to solve.
That trade-off made sense, as generalized software could scale to help more people and
generate more revenue.
For the user though, it created a paradigm where a specific tool to solve a specific problem
was hard to find.
So you either had to patch a bunch of consumer software together (annoying), learn to code (time-consuming), or convince someone else to do it for you (often expensive).
With Replit, that paradigm has been shattered.
Now building software is easy and it almost feels like you're playing a game, trying
to craft the perfect tool to beat the level that's been stumping you for weeks.
A useful analogy here is Minecraft.
Minecraft doesn't give you a finished solution or a prescribed path.
It gives you a world, a set of primitives, and fast feedback.
If you need a tool, you build it.
If the tool isn't right, you can try another way.
You don't wait for a perfect object to exist.
You craft what you need from what's available.
Replit increasingly feels like that kind of environment for software.
Reid concludes:
In a few years we'll shift from thinking "what can I buy to help me?" to "what can I build to help me?"
Work and life will feel like progressing through levels, where each new challenge is met not by waiting for the right software to exist but by creating it.
The real change isn't that everyone becomes a programmer.
It's that everyone gains the ability to shape their environment, extend their capabilities,
and move forward under their own control.
The real change is that everyone becomes a gamer, building for the most important game
they'll play.
I don't know ultimately what the future of AI is.
I don't know how it's ultimately going to change software engineering jobs to say
nothing of the rest of knowledge work.
But what I know is that, sitting here at the beginning of 2026, there has been a shift that many of us have felt, where the possibility of building tools that let us navigate the world, in a personal and work context, has actually started to take root as a default behavior. And as that happens, we're starting to get glimpses of what the next generation of all of our jobs might be.
Now it is going to take a lot of stumbling around in the dark for it all to come together
and frankly that's the exciting thing about this moment.
For now though, I think we will close there. That is certainly enough to chew on for one Sunday.
Appreciate you guys listening or watching as always and until next time, peace.
