
This week, the global AI conversation hit a new level. From investor memos to viral economic doomsday scenarios, the debate is shifting from “Is AI real?” to “What happens if it actually works?” In this episode, we break down the “2028 Global Intelligence Crisis” thesis, the market’s dramatic reaction, and the growing divide between AI-as-doom-loop and AI-as-productivity-explosion narratives. We explore whether efficiency is destiny, why human preferences may matter more than we think, and what a “Schrödinger’s Apocalypse” really means for the economy.
Sources:
https://x.com/Citrini7/status/2025668400396349476
Want to build with OpenClaw?
LEARN MORE ABOUT CLAW CAMP: https://campclaw.ai/
Or for enterprises, check out: https://enterpriseclaw.ai/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, the week the global AI conversation hit a whole new level.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, Assembly, Robots and Pencils, AIUC and Blitzy.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe on Apple Podcasts.
To learn about sponsoring the show, send us a note at sponsors at AIDailyBrief.ai.
While you're on AIDailyBrief.ai, you can subscribe to our newsletter, which is newly restarted,
and which is going to have all the links to all the articles and posts that I referenced in the show.
And you can also learn about all our various other ecosystem initiatives like Claw Camp or Enterprise Claw,
registration for which is open until the end of next week.
The last couple of months have seen a steady growing acknowledgement of just how significant the disruption of AI is.
This shift came first to those who are actually in the industry.
Just last week, OpenAI founder Andrej Karpathy wrote,
it's hard to communicate how much programming has changed due to AI in the last two months.
Not gradually and over time in the progress-as-usual way, but specifically this last December.
There are a number of asterisks, but in my opinion, coding agents basically didn't work before December and basically work since.
The models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long tasks.
Well past enough that it is extremely disruptive to the default programming workflow.
As a result, programming is becoming unrecognizable.
You're not typing computer code into an editor, the way things have been since computers were invented.
That era is over.
You're spinning up AI agents, giving them tasks in English, and managing and reviewing their work in parallel.
The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrators
with all of the right tools, memory, and instructions that productively manage multiple parallel code instances for you.
The leverage achievable via top-tier agentic engineering feels very high right now.
In my opinion, this is nowhere near a business-as-usual time in software.
Now of course, this is not limited to software.
On the earnings call where he explained the 40% reduction in the Block team,
Jack Dorsey specifically noted the leap that AI had made around the same December timeline.
And indeed Wall Street is one of the main places where the recognition of the phase shift in AI is fully coming home to roost.
Michael Gayed of the Lead-Lag Report wrote this week,
I once tweeted AI is BS. Have been playing around with Perplexity Comet to automate workflows.
It's not BS. It's going to fundamentally alter the world. I believe it now.
By the end of the year, I believe we will see huge layoffs.
Block is a sign of what's to come.
This humble shift, a throwing up of the hands and saying I was wrong to doubt this,
maybe got its best expression this week in a memo from legendary Oaktree investor Howard Marks.
The memo is called AI Hurdles Ahead.
In it Marks writes,
My main reason for writing this addendum is to address significant changes
that have taken place in AI over the three months since I published Is It a Bubble?
First he said, there's the pace at which developments in AI are occurring.
That speed is unlike anything we've seen before now and this has implications that have never existed.
AI is growing at speeds that greatly outpace the technological innovations of the past.
Nothing has ever taken hold at the pace AI has.
It's able to change the world at a speed that approaches instantaneous,
outpacing the ability of most observers to anticipate or even comprehend.
The second important thing that's happened has been an incredible leap ahead in AI's capabilities.
Level one is chat AI, level two is tool using AI, level three is autonomous agent.
At this level the user doesn't tell AI what to do.
The user gives it a goal as well as the parameters of the desired output.
The agent does the work, checks it, and submits a finished product.
This is labor replacement at the task level, not assistance.
Marks continues, the most significant thing that distinguishes AI is something we've never dealt with in connection with prior technological developments.
AI's ability to act autonomously.
The bottom line, Marks concludes, is that AI is very real,
capable of doing a lot of work that heretofore has been done by knowledge workers,
and growing extremely rapidly in terms of applications.
What we see today is only the beginning.
As I mentioned above, if I had to guess, I'd say its potential is more likely underestimated today than overestimated.
He does point out that it's not clear that the market is pricing that disruption the right way,
but that the change is undeniable.
And yet still, when we look back at the history of this particular week in time,
this will be the Citrini Report week.
The piece, by Citrini's Alep Shah, is called The 2028 Global Intelligence Crisis
and walks through a doomsday scenario where effectively AI is so good that it's actually bearish,
creating a doom spiral where AI does everything, allowing companies to cut human workers,
which reduces spending, which reduces available capital from consumers,
which forces companies to lay off more, and so on and so forth.
The note, while admittedly an artifact of speculative exploration,
hit with the force of a neutron bomb in a Wall Street environment that finds itself extremely destabilized
and unclear what to make of AI change.
Is it an infrastructure bubble? Is it the SaaS-pocalypse where AI does everything?
Can it be both at the same time?
Despite having, as Deutsche Bank strategist Jim Reid put it,
a high vibes-to-substance ratio, the report was extremely resonant.
So much so that much of the rest of the week has been responses and rejoinders.
Economics opinion writer Noah Smith wrote a response called the Citrini post is just a scary bedtime story.
He summed it up, AI might take your job, but it probably won't crash the economy,
and if it does, we know how to deal with it.
Noah writes, if you don't like posts about AI, I have some bad news.
For the next few years, there are probably going to be a lot of them.
It's not often one gets to live through an industrial revolution in real time,
especially one that moves so quickly.
There will be very few pieces of the economy, if any, that this revolution doesn't touch,
and it will have major implications for other things I write about, like geopolitics, society, etc.
AI is not going to be a special, compartmentalized topic for a long time.
It's going to be central to a lot of what's going on.
If you find that boring, well, all I can say is we don't get to choose the times we live in.
Every couple of weeks, someone comes out with a big post about how AI is changing everything,
and that post goes viral and everyone talks about it for a few days.
A couple of weeks ago, it was Matt Shumer's Something Big Is Happening.
This week, it's Citrini Research's The 2028 Global Intelligence Crisis,
and yes, the title is in all caps.
The post paints a picture of a future in which AI disrupts lots of different kinds of white collar work
and service industry business models and industries like software, finance, business services, and so on,
and in which this disruption causes an economic crisis.
Noah continues that this is really two theses in one.
A microeconomic thesis about which industries and jobs AI will disrupt,
and a macroeconomic thesis about what this will do to the economy overall.
Now, I'll pause the reading there, but Noah goes on to basically make the point that, among other things,
the Citrini Post operates from the implicit idea that there will be no policy response,
a fairly confusing view given the magnitude of the disruption they're articulating.
The Kobeissi Letter also took on the Citrini post.
Their response essay was called What if AI doesn't actually end the world?
In it they write, what's obviously true: AI is not another software feature or efficiency gain.
It's a general-purpose capability shock that touches every white-collar workflow simultaneously.
Unlike any revolution in history, AI is getting better at everything simultaneously.
But what if the Doomsday scenario is false?
It assumes demand is fixed, that productivity gains don't expand markets,
and that the system cannot adapt faster than the disruption.
We believe, they continue,
there is a second path that is being dramatically underpriced.
The same Anthropic takedowns that look like early signs of systemic collapse
may ultimately be the start of the largest productivity expansion ever.
While our analysis is not a certain outcome,
it is important to remember that humanity has always prevailed,
and the free market always works itself out.
A couple of the key pieces of the argument from the Kobeissi Letter:
one is something that I've talked about frequently on this show,
that the doom loop or any long-term job loss scenario assumes that demand is fixed.
The bearish loop, they write, creates a simplified linear model:
AI gets better, businesses reduce headcount and wages, buying drops,
businesses invest in AI again to defend their margins and the downward cycle repeats.
This assumes, they write, a completely stagnant economy.
History suggests otherwise.
When the cost of producing something collapses, demand rarely stays flat.
It expands.
When compute costs fell, we did not consume the same amount of compute more cheaply.
We consumed orders of magnitude more of it and built entirely new industries on top.
AI decreases costs in every sector,
and when service costs go down, purchasing power increases with or without wage growth.
The Doom loop becomes dominant only if AI replaces labor without materially expanding demand.
The optimistic scenario emerges if cheaper compute and productivity yields
entirely new categories of consumption and economic activity.
The way that I've put this in the past is that if the cost to produce code is one one-hundredth of what it used to be,
we don't get one one-hundredth of the coders, we get a hundred times more code.
The Kobeissi Letter also argues that labor markets don't vanish but restructure.
They write,
a key concern is that AI disproportionately affects white collar employment,
which drives discretionary consumption and housing demand.
This is true and a legitimate concern, particularly as the wealth divide is already so massive.
However, AI struggles with physical world dexterity and human identity.
Skilled trades, hands-on health care, advanced manufacturing,
and experienced driven industries retain structural demand.
In many cases, AI complements these roles rather than replaces them.
More importantly, AI lowers the barrier to entrepreneurship.
When one individual can automate accounting, marketing, support, and coding tasks,
small-scale business formation becomes easier.
We are bullish on small businesses.
In fact, the removal of barriers to entry through AI may be the solution to flatten the wealth divide that we currently face.
The internet killed certain job categories but created entirely new ones.
AI may follow a similar story, compressing some white collar functions
while expanding self-directed economic participation elsewhere.
In their conclusion, they write,
AI amplifies outcomes, it can amplify fragility if institutions fail to adapt,
and it can also amplify prosperity if productivity outpaces disruption.
The Anthropic takedowns are signals that workflows are being repriced
and cognitive labor is becoming cheaper, a clear transition,
but transition is not the same as collapse.
Every other major technological revolution has looked destabilizing at the start.
The most underpriced possibility today is not dystopia, it's abundance.
AI may compress rents, reduce friction, and restructure labor markets,
but it may also deliver the largest real productivity expansion in modern history.
And it wasn't just internet newsletters that were publishing rebuttals,
no less than Citadel Securities got in on the game.
Their piece, which they called The 2026 Global Intelligence Crisis,
pointed out that much of the evidence just points in a different direction.
Easily the most referenced part of the Citadel rejoinder
is the chart of indeed job postings for software engineers
that shows them going up dramatically over the last few months.
They also point out that maybe the biggest X factor in all of this is AI diffusion speed.
Not how much of the white collar work AI could do right now theoretically,
but at what speed will enterprises actually allow it to do that work?
Citadel writes, the first order presentation of AI adoption is generally a binary question.
Do you use AI?
The more important question in so far as it relates to the AI displacement narrative is,
how intensely is AI being used for work?
Looking at St. Louis Fed data, they say,
the data presents little evidence of any imminent displacement risk.
Recursive technology, they point out, is not recursive adoption,
and the risk of displacement declines with a slower pace of adoption.
Finally, calling upon the example of history, they write,
in 1930, John Maynard Keynes wrote
Economic Possibilities for Our Grandchildren,
predicting that productivity growth would be so powerful that by the early 21st century,
the work week would fall to 15 hours.
He was directionally correct about productivity growth,
but profoundly wrong about labor market implications.
Rather than working dramatically less,
society consumed dramatically more.
Why?
Because rising productivity lowered costs and expanded the consumption frontier,
preferences shifted towards higher quality goods, new services,
and previously unimaginable forms of expenditure.
Leisure increased modestly, but material aspiration expanded far more.
History suggests productivity gains do not automatically translate into labor withdrawal
or demand collapse, as they alter the composition of demand,
expand real incomes, and generate new industries.
Keynes underestimated the elasticity of human wants.
You've heard me talk about Assembly AI and their insanely accurate voice AI models,
but they just ship something big.
Universal 3 Pro is a first of its kind class of speech language model
that lets you prompt speech recognition with your own domain context and vocabulary,
instead of fixing transcripts in post-processing.
It's more flexible than traditional ASR and more deterministic than LLMs,
so you get accurate output at the source,
and can capture the emotion behind human speech that transcripts often miss,
all without custom models or post-processing hacks.
And to celebrate the launch, they're making it free to try for all of February.
If you're building anything with voice, this one's worth a look.
Head to assemblyai.com slash free offer to check it out.
Today's episode is brought to you by robots and pencils,
a company that is growing fast.
Their work as a high-growth AWS and Databricks partner
means that they're looking for elite talent ready to create real impact at velocity.
Their teams are made up of AI native engineers, strategists, and designers
who love solving hard problems, and pushing how AI shows up in real products.
They move quickly using robot works, their agentic acceleration platform,
so teams can deliver meaningful outcomes in weeks, not months.
They don't build big teams, they build high-impact nimble ones.
The people there are wicked smart with patents, published research,
and work that's helped shape entire categories.
They work in velocity pods and studios that stay focused and move with intent.
If you're ready for career-defining work with peers who challenge you and have your back,
robots and pencils is the place.
Explore open roles at robotsandpencils.com slash careers.
That's robotsandpencils.com slash careers.
There's a new standard that I think is going to matter a lot for the enterprise AI agent space.
It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy,
security, safety, reliability, accountability, and societal impact,
all verified by a trusted third party.
One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before
and is just an absolute juggernaut right now, just became the first voice agent
to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent.
What that means in practice is real-time guardrails that block unsafe responses
and protect against manipulation plus a full safety stack.
This is the kind of thing that unlocks enterprise adoption.
When a company building on 11 Labs can point to a third party certification
and say our agents are secure, safe and verified, that changes the conversation.
Go to aiuc-1.com to learn about the world's first standard for AI agents.
That's aiuc-1.com.
Weekends are for vibe coding.
It has never been easier to bring a passion project to life, so go ahead
and fire up your favorite vibe coding tool.
But Monday is coming, and before you know it, you'll be staring down a maze of microservices,
a legacy COBOL system from the 1970s, and an engineering roadmap that will exist
well past your retirement party.
That's why you need Blitzy, the first autonomous software development platform
designed for enterprise scale code bases.
Deploy it at the beginning of every sprint and tackle your roadmap 500% faster.
Blitzy's agents ingest your entire code base, plan the work,
and deliver over 80% autonomously: validated, end-to-end tested, premium-quality code
at the speed of compute, months of engineering compressed into days.
Vibe code your passion projects on the weekend, bring Blitzy to work on Monday.
See why Fortune 500s trust Blitzy for the code that matters at Blitzy.com.
That's B-L-I-T-Z-Y.com.
And this idea of human wants, both in terms of their ability to expand,
but also just in terms of their manifestation in reality and in theory,
was the subject of my ponderings, written in the midst of a 20-hour forced layover
in the Amazonian rainforest, that I called We're All Missing the Most Important Market
Force That Will Shape AI, or, My Plane Made an Emergency Landing in the Amazon
and All I Got Was This Lesson About the Future of the World.
The piece reads:
It's a weird week, man.
A bomb cyclone blizzard with the force of a category two, nearly three hurricane,
shut down New York and the rest of the East Coast.
This was problematic for lots of reasons, not least of which was that it completely
torpedoed our family's return from Uruguay to the Hudson Valley.
Meanwhile, back home, the latest AI doomer sci-fi,
and I say that with a lot less derision than it probably sounds,
struck a nerve deep enough to rip the throats out of IBM, VISA,
and many others just because of what AI might do.
As I sit here in Manaus, Brazil, I find myself contemplating
how my family's experience over the last 24 hours or so demonstrates
just how wrong I think we are about how AI ends up playing out in the economy.
I'll give you a moment to finish ralphing at the utter LinkedIn-ness of that
statement, and then let me explain why we're all missing the most important
market force that will shape AI.
When we got the notification that we were making an emergency landing in
Manaus, the trip had already been another calamity.
A few days earlier in the middle of the night, we got the text notification
from Delta that, almost assuredly, our upcoming trip from Montevideo to JFK
was going to get 86'd by the impending snowstorm.
The options for rescheduling weren't great.
It was basically stick around Uruguay till Friday,
when we were supposed to have gotten home Monday morning,
or scurry on Monday to do a new multi-leg trip through São Paulo and Atlanta.
We figured that even if things were still gnarly in New York on the back end,
solving that from Georgia was easier than solving that from São Paulo.
Dutifully, we drove the two hours from Jose Ignacio to the Montevideo airport,
returned our tiny VW rental and let the kids scarf some Mickey D's
before the 27 hours of upcoming travel.
In retrospect, it was the last peaceful moments of optimism we'd have for some time.
We got to the check-in line and instantly it was clear that something was wrong.
I can't check you in, said the attendant.
Wait, what? Why?
We can't check in anyone whose final destination is New York.
But the storm is over.
We're not even getting there until tomorrow when it will be even more over.
And we've got stops in São Paulo and Atlanta.
Let us get stuck there.
I can't. It's our policy.
And after a call to her supervisor,
it remained their policy.
We didn't have a lot of great options.
Turn around and hang out for another five days.
Or call Delta, have them delete the final Atlanta JFK leg
so we could at least get to the US.
Atlanta, it was and after some frantic searching,
we booked what seemed like the last rental car in America to do the 15-hour drive home
from Hartsfield-Jackson.
Fast forward about 10 hours.
We've made it through the first leg of the flight,
a couple hours in the actually kind of excellent GRU airport in Brazil,
and all of us, including four-year-old Gus and seven-year-old Alden,
are passed out dreaming of a next day full of Kia Souls and a dozen Wawa
and Red Bull stops.
That is until at 4 a.m., the captain gets over the loudspeaker
and says that, sorry, a generator has stopped working
and we have to make an emergency diversion into Manaus.
That's the capital of the Amazon for those keeping track at home.
We hadn't even made it out of Brazil.
So much for at least if we get stuck, it will be in the US.
Airports are stressful at the best of times.
300 people dropped out of the sky into a place many of them had never heard of
and at the mercy of the gods of airplane mechanics,
hotel availability, and Brazilian customs authorities
and you've got something else entirely.
But this is supposed to be the setup to a story about AI, right?
We're now sitting here at the lovely Hotel Villa Amazonia in the old part of Manaus,
waiting for a room to be ready so we can catch a few winks before trudging back to catch another plane,
hopefully a new one to be honest, that will somehow, some way, get us back to the US of A.
It is absolutely undeniable how much AI has made this experience better.
I've used LLMs to translate back and forth in a language I barely know how to say thank you in,
research the safety profile of different areas.
It's not a war zone, but it's not low risk either.
Gee, thanks, ChatGPT, real reassuring.
And I've also used LLMs to hunt for rental cars, plan driving routes, and of course,
reassure myself that Airbus A330s really can fly with just one generator
in those tense 45 minutes between when we got the announcement and when we touched down.
And yet, as awesome as AI has been, every part of the story has really been about human interaction
and human discretion, either that went for us or against us.
The attendant at MVD and her supervisor who didn't buck a clearly stupid policy
that might have made sense 24 hours earlier but certainly didn't anymore.
The customer service reps of the Delta Diamond medallion status line,
who range from wildly unhelpful on the one end of the spectrum to hustling to find us a flight to Philly on the other.
The hotel staffers, who overlooked that we hadn't technically booked our kids on the reservation
and who hustled to get us in a room before the 3pm check-in time.
AI has been extremely helpful during this trip,
but at no point would I have rather interacted with AI than these humans.
That's not just because I prefer human interaction out of some historic sense of legacy
of the way things have always been done.
In fact, I'd venture to say that I'm exactly the type of person who, in many situations,
would wildly rather interact with an anonymous robot.
The reason that I preferred human interaction is the possibility of exception.
Human systems are built with an implicit assumption of discretionary non-compliance.
Rules tend to be written much tighter than anyone expects them to be followed.
Everyone knows this. The rule writers know it, the managers know it,
and so do the humans interacting with the system as customers. Human judgment is the shock absorber
between the world the policy was designed for and the messy reality.
To be clear, this is a feature, not a bug.
The whole system would be significantly more brittle if everyone just followed the rules perfectly.
You can probably see where I'm going with this.
A world where AI agents perfectly follow the policy all the time would be,
in many, many real world contexts, much worse than the one where humans follow it only imperfectly.
Call it the paradox of perfect compliance.
But couldn't AI have grace and flexibility programmed in as well?
Sure, and as we design agent-led systems, it will probably be important to remember
that in people's real lived experience, exceptions are as important as rules.
But kindness as governance, an unspoken and yet nearly universal aspect
of well-functioning human systems, is hard to program.
Small acts of bureaucratic rebellion tend not to be the byproduct of clear rational calculations.
Instead, they are felt decisions.
They are a split-second judgment call that comes on the heels of the utterly relatable exhale
of an exhausted parent at their wits end, just trying to keep it together for their even more exhausted kids.
There's something in the pleas of the person being helped that suggests that as bad as this situation is,
there's something else they're going through that's even harder.
Which brings us to this weekend's market freakout cause du jour.
The latest AI doomer fanfic slash thought exercise is a fictional dispatch from 2028
describing an AI-driven economic crisis.
This one isn't about the fallout of an AI bubble popping because of a performance plateau.
Instead, it's a meditation on what happens if AI actually gets as good as we think it will.
Basically, so bullish it's bearish.
The piece is from well-respected market research firm Citrini and is well-constructed and worth reading.
And boy did people read it.
Nine million views on the X post alone. Bloomberg, The Wall Street Journal,
and many more wrote articles about the piece as the latest leg of the SaaS-pocalypse
cleaved billions off of tech and finance stocks.
In other words, markets actually moved on a literal work of fiction.
There is a ton of great debate to be had around the piece,
which is of course happening right now, and which makes the outcome of them having shared it
likely better in the medium and long run than if they hadn't shared it,
even if DoorDash stockholders don't really agree right now.
I'm not really interested in a point-by-point rebuttal.
What I do want to point out is that like most analysis on both the bear and bull side,
it rests on an assumption, so deeply embedded that almost no one questions it.
That because markets reward efficiency, efficiency is inevitable.
This efficiency gospel isn't exactly wrong, but it mistakes means for ends.
And here is my main point.
Markets don't exist to be efficient.
Markets exist to serve human preferences.
Outside of the efficiency gospel, the value of efficiency is primarily
in how it improves a company's ability to serve human wants and needs,
not an ends in and of itself.
Confusing the two is like saying the point of a restaurant is great ingredients and a clean kitchen.
Too much of the AI discourse on both bear and bull sides makes exactly this mistake.
We've thought a lot about how much more efficient AI will make things,
but too little about what we and other humans of the future are going to want.
AI might make every part of a company's operations more efficient,
but will that company's customers actually want to interact with the new,
more efficient version on the other side?
What are the chances that they actually reject it in favor of a more human version?
Will they actually be willing to pay a premium for a less purely efficient experience
because they like that version of the experience better?
Markets make this confusing.
Investors are the high priests of the efficiency gospel.
And the day-to-day excitement of market moves tends to lead media attention
to focus more on the stock story than on the value created for the end consumer.
Indeed, for the market priests and priestesses, the value to the end consumer is
actually secondary to the value to the shareholder.
But that only lasts for so long.
A company can live for a long time because the markets like it, but not forever.
Ultimately, the buck stops with the customer.
And when it comes to the customer, human institutions are not outcome generating machines,
at least not exclusively.
In many cases, they're also or even primarily agency validating systems.
There's plenty of evidence to suggest that people are willing to pay for the possibility
of being an exception.
The chance that someone will look at your situation and deviate from the script,
the knowledge that the person across the counter could break the rule for you,
even when they don't.
Friction isn't always waste.
Think about all the ways capitalism has invented for us to transform the possibility of exception
into the exception being the norm.
Loyalty programs, status tiers, premium service.
The entire premium loyalty economy is a multi-billion dollar bet that people will pay
for guaranteed access to generally favorable human discretion.
A couple of years ago, I decided I was being stupid not to concentrate
airline loyalty on a single airline and so picked Delta.
We spend a fair bit of scratch on a Delta SkyMiles card, so I got to Diamond last year.
Turns out there's a special 24-hour line just for Diamond members.
And man have I put that thing to the test the last few days.
Even as Reddit rages at two, three, four, even five hour wait times with Delta
in the wake of the blizzard, I've been able to get a real live human being on the phone
in under a minute a half dozen times.
The point is Delta isn't trying to automate the Diamond line.
The Diamond line is the product.
Automate that and you've eliminated the thing people are paying for.
I'm not trying to be Pollyannaish about the magnitude of AI disruption.
Anyone who listens to the podcast knows how enormous a change I think we're living through
and how profoundly challenging this next middle part could be,
even though I'm optimistic for the long term.
But a big strand of the most urgent concerns are predicated not just on the scale of disruption but the speed.
This type of doomerism rejects comparisons to the past because those paradigm shifts were more gradual,
while this one is happening everywhere all at once.
The core question these arguments tend not to grapple with is
just because AI could do something, will it always be called to do so?
If you live in the efficiency gospel, the answer is, of course, yes.
If a non-human intelligence can perform the same task more efficiently,
it will inevitably be tapped to do that task at the expense of the human who used to do it.
But efficiency is not destiny.
Indeed, efficiency is only one type of market force.
Humans have agency. Humans have purchasing power.
Even in the Citrini report, the white-collar labor folks aren't out of consumer power yet.
If human desire runs counter to efficiency, as it often does,
there's every reason to think that the old maxim that the customer is always right
will provide a serious counterweight to the unstoppable market advance of the machines.
Safetyists have long advocated some type of pause to allow us more time to adapt.
I think we might be underestimating the extent to which human consumer preferences will do that all on their own.
It's entirely possible I'm wrong, and the forces of the efficiency gospel are too strong to resist.
But I'm on hour 30, or 40, or 50, or who knows by the time you're reading this,
stranded in who-knows-where, Brazil, with two small kids.
AI as information guide has been amazing,
but exactly zero times have I wished I could have a more efficient AI to interact with.
What I've wanted was a human being who looked at our situation
and decided to break the rules just a little to help us get home.
Efficiency is not destiny.
And ultimately, and now I'm done reading and back to just talking as myself,
the thing to note here is just that as compelling as all of these arguments sound,
as many holes as there are in one, as many better points in another,
the reality is that we are all just grasping and guessing at a future that we cannot know.
Abundance author Derek Thompson writes,
the level of uncertainty is so high and the quality and supply of real world real-time information
about AI's macroeconomic effects so paltry that very serious conversations about AI
are often more literary than genuinely analytical.
I feel lucky to have been able to have conversations about the frontier of AI
with executives and builders at frontier labs, economists at AI conferences,
investors in AI and other AI folks at off-the-record dinners,
where important truths can theoretically be shared without risk.
I can't emphasize enough that "nobody knows anything" is about as close to the reality here
as three words are going to get you.
Nobody knows what's going to happen this year, or next year, or the year after that.
There is no secret cigar-filled room of people who have unique access
to some authentic postcard from the future.
When you drill down underneath the bluster, the doomerism, the fear, the anxiety,
what's there at the bottom is genuine uncertainty,
a vacuum into which storytelling is flooding.
The frontier labs don't really know what they're building exactly.
The economists don't really know how to model the thing they're claiming they're building.
I wish more people talked about and thought about this subject through that sort of lens.
We're trying to model the economy-wide effects of a technology whose properties
the frontier labs can't even really describe yet.
Whatever you think about AI today, be prepared to change your mind soon.
And in the extension of that post on his Substack,
Derek writes that artificial intelligence offers its obsessives
a kind of Schrödinger's apocalypse,
which exists in a superposition between the economy is about to change forever,
and from a macroeconomic standpoint, everything still looks eerily normal.
My final reminder for this episode is that in the case of this Schrödinger's apocalypse,
it's not just a question of acknowledging that multiple possibilities exist in the box.
I think we need to recognize at a much more fundamental level
that we have a lot more agency than we give ourselves credit for
to decide and shape which versions of this future come to pass.
For now that is going to do it for today's AI Daily Brief,
appreciate you listening or watching as always,
and until next time, peace!
