
The conversation around artificial intelligence has been captured by two competing narratives – techno-abundance or civilizational collapse – both of which sidestep the question of who this technology is actually being built for. But if we consider that we are setting the initial conditions for everything that follows, we might realize that we are in a pivotal moment for AI development which demands a deeper cultural conversation about the type of future we actually want. What would it look like to design AI for the benefit of the 99%, and what are the necessary steps to make that possible?
In this episode, Nate welcomes back Tristan Harris, co-founder of the Center for Humane Technology, for a wide-ranging conversation on AI futures and safety. Tristan explains how his organization pivoted from social media to AI risks after insiders at AI labs warned him in early 2023 that a dangerous step-change in capabilities was coming – and with it, risks that are orders of magnitude larger. Tristan outlines the economic and psychological consequences already unfolding under AI's race-to-the-bottom engagement incentives, as well as the major threat categories we face: including massive wealth concentration, government surveillance, and the very real risk that humanity loses meaningful control of AI systems in critical domains. He also shares about his involvement in the new documentary, The AI Doc: Or How I Became an Apocaloptimist, and ultimately highlights the highest-leverage areas in the movement toward safer AI development.
If we start seeing AI risks clearly without surrendering to despair, could we regain the power to steer toward safer technological futures? What would it mean to design AI around human wellbeing rather than engagement, attention, and profit? And can we cultivate the kind of shared cultural reckoning that makes collective action possible – before it's too late?
(Conversation recorded on March 5th, 2025)
About Tristan Harris:
Tristan is the Co-Founder of the Center for Humane Technology (CHT), a nonprofit organization whose mission is to align technology with humanity's best interests. He is also the co-host of the top-rated technology podcast Your Undivided Attention, where he, Aza Raskin, and Daniel Barcay explore the unprecedented power of emerging technologies and how they fit into both our lives and a humane future. Previously, Tristan was a Design Ethicist at Google, and today he studies how major technology platforms wield dangerous power over our ability to make sense of the world and leads the call for systemic change.
In 2020, Tristan was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. The film unveiled how social media is dangerously reprogramming our brains and human civilization. It reached over 100 million people in 190 countries across 30 languages. He regularly briefs heads of state, technology CEOs, and US Congress members, in addition to mobilizing millions of people around the world through mainstream media.
Most recently, Tristan was featured in the 2026 documentary, The AI Doc: Or How I Became an Apocaloptimist, which is available in theaters on March 27th. Learn more about Tristan's work and get involved at the Center for Humane Technology.
Join The Human Movement now at HUMAN.MOV
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future
Join our Hylo channel and connect with other listeners
There's a lot of different risks from AI.
There's a thing that happens,
which is that people feel overwhelmed,
and then they shut down.
And the key is to be clear-eyed
about the nature of what we're facing.
And then if we can see it clearly,
it's not about being a doomer.
It's the opposite.
It's that once I see all that,
what do we want to steer towards instead?
How do we avoid the misuse risk?
How do we care for people economically?
How do we avoid power concentration?
What are the measures we do to prevent ubiquitous surveillance?
And how do we make sure that all countries,
instead of being in an arms race
to this uncontrollable AI that goes rogue,
we set up clear red lines
so that we don't basically have humanity lose control.
And all of those things, I think, are possible
if we were all clear-eyed to make a different choice.
You're listening to The Great Simplification.
I'm Nate Hagens.
On this show, we describe how energy, the economy,
the environment, and human behavior all fit together,
and what it might mean for our future.
By sharing insights from global thinkers,
we hope to inform and inspire more humans
to play emergent roles in the coming great simplification.
Today, I'm pleased to be joined by the co-founder
and president of the Center for Humane Technology,
Tristan Harris, ahead of his upcoming appearance
and involvement in the new documentary, The AI Doc:
Or How I Became an Apocaloptimist,
which will be released in theaters in two days, on March 27th.
In this broad ranging and quite potent conversation,
Tristan and I discuss the best and worst case possibilities
for AI development and humanity,
and what it would actually require from us
as a collective species to steer towards
more positive technological futures.
I feel compelled to say up front
that this was one of my favorite episodes we've recorded
all year and probably top 10 of all time.
It's really good.
Formerly a design ethicist at Google,
Tristan has centered his work around catalyzing
a comprehensive shift towards humane technology
that operates for the common good
and strengthens our capacity as humans
to tackle our biggest global challenges.
He has been named to the TIME 100 Next list of leaders
shaping the future, and to Rolling Stone magazine's
25 People Shaping the World.
He is also the co-host, along with my friend Aza Raskin,
of the podcast, Your Undivided Attention,
which consistently ranks in the top 10 technology podcasts
on Apple.
He was also the primary subject of the acclaimed Netflix
documentary, The Social Dilemma,
which unveiled the hidden machinations behind social media
and reached over 100 million people worldwide.
In this episode, Tristan and I discuss how his previous focus
on social media safety paved the way
for these higher stakes discussions
about artificial intelligence.
And he answers the question of why we're developing a technology
that could cause irreparable harm to humanity
and to the world.
Most importantly, Tristan lays out action steps
that each of us can take to steer towards a more mature
and safer relationship with these unfolding technologies,
even as our current trajectory makes that feel impossible
to many of us.
With that, please welcome Tristan Harris.
[Music]
Tristan Harris, my friend, welcome back to the program.
Nate Hagens, it's really good to be with you again, my friend.
So in the TGS trivia that no one cares about or knows about:
in our first podcast three years ago,
I had a black smudge on my mustache
and people thought it was my mustache,
but it was actually soot from cleaning a chimney.
And I didn't realize it till the episode came out.
Not that it matters, but when I see it, I think of that.
You know, I've been thinking about that every single day
since that interview.
So I'm so glad you cleared that up.
All right, it's not the opposite.
I came clean, I came clean.
But I did take a look at my face before we sat down for this one.
So you have been working for a long time
at the Center for Humane Technology on humane technology,
mostly focused on social media, which was the topic
of your last episode on TGS.
So what's new?
Well, Nate, we shut down the Center for Humane Technology
because we completely solved all the problems.
The US and China signed an agreement on AI.
We realized that we were headed towards a cliff
and we realized this is ridiculous.
The US and Chinese researchers shared all this evidence
of how AI would kind of go rogue in all these scenarios,
and that forced us to develop all these red lines.
We realized that we had to put limits on decentralizing AI,
that we were sort of decentralizing power
that wasn't matched with the level of wisdom and responsibility.
And we completely solved the social media problem.
There was a trillion dollar lawsuit
for the trillions of dollars of damage
that social media had done to the social fabric.
And so there was this big tobacco lawsuit
against the big engagement business model
that was driving all of that.
That lawsuit ended up funding the rewilding of the social fabric
and funding local news and journalism
and including forcing design changes
of all these tech platforms.
So they started rewarding, instead of division
and outrage economy, they started rewarding unlikely consensus.
So now the psychological commons of humanity
was turning around.
As part of that lawsuit, we changed all the dating apps.
So suddenly dating apps weren't harvesting people's loneliness,
getting people swiping like slot machines,
and keeping people lonely.
And all these dating apps were now forced,
because of this lawsuit, to fund actual
real-world community events in every major city.
So instead of feeling scarcity,
you had all these people feeling abundance
in the human connections that they could form.
And it turned out that once people were in healthy relationships,
all the polarization went down by about 30%
because a lot of the polarization was just people feeling lonely
and angry.
There was a simple rule that cleaned up all the issues
of technology and kids,
which is that Silicon Valley only shipped products
that their own children used for eight hours a day
and that cleaned up about 90% of all the problems.
So yeah, now I got to be a painter in Bali
and I go surfing and I don't have to think
about technology and making things better.
Well done.
You look great and congratulations on all that
from this everything.
Yeah, well, no, that didn't happen.
So my understanding is you have a new documentary
coming out called the AI doc,
how I became an apocalyptic, apocalypticist.
So you've shifted from social media
to now your work is centered on the safe development
of artificial intelligence.
So seriously, what happened?
What were the moment or moments that led you
to what I understand as a full attentional shift
of you and your organization towards advocating
for AI safety measures?
Yeah, I think it was 2023 when we last had our conversation,
I think on social media.
So it was in January of 2023 that Aza Raskin, my co-founder,
who you know well from the Center for Humane Technology.
He and I both got calls from people inside of the AI labs
that basically told us that there was a huge step function
in capabilities in AI that was coming.
They were talking about GPT-4 before it happened
and they basically said this is really dangerous.
The arms race dynamic is out of control
and you need to go wake up all the institutions
in Washington, you need to go wake up the public
and I looked at this person who you know gave me this phone call
and I said, first of all, AI safety and AI governance
have long been a conversation.
There's a lot of people who have been working on this.
Why don't you all have this handled already?
I wasn't really tracking AI.
And the truth is that the corporate sort of market dominance
arms race dynamics had just gotten out of control.
And it was a real shock for me because it was like getting
a call from the Robert Oppenheimers
inside the Manhattan Project telling you
that the world was about to completely change.
And I had not been fully apprised of what that really meant.
And then Aza and I basically rallied,
we interviewed a hundred people across what was happening in AI
and we tried to sort of assimilate and synthesize all that
into a presentation called the AI Dilemma that we released.
We gave that presentation in New York, in DC, and in San Francisco
to our highest level contacts from people
who had seen the social dilemma,
national security people, people from the White House,
national security councils, media connections.
We basically just wanted to wake up the institutions.
And that led to basically a full pivot
of the Center for Humane Technology into AI.
And it's not the case that they're disconnected by the way.
Some people I think were confused when we shifted gears.
But really you can think of social media as like a little baby AI
that all it was doing was just picking which posts,
which content, which images, which videos went in front
of a billion human social primates
and just picking which order they appeared.
And that little baby AI that just did that little narrow thing
was enough to wreck democracies,
create the most anxious and depressed generation
of our lifetime, shortened attention spans,
and completely change the social fabric.
So if a baby AI could do that,
and that was a misaligned AI, right,
it wasn't aligned with the mental health of young people.
It wasn't aligned with what makes democratic societies
have good high integrity information flows
and positive relationships.
It was misaligned with all those features
of what makes society work.
So I think much in the same way that Nate,
your work is about understanding how the super organism
lives on top of a biosphere that has a certain kind of health
and we can do science on what health of a biosphere looks like.
And we can know that forever chemicals disrupt that biosphere.
You can think of social media and AI as basically a technosphere
that's living on top of the social sphere and the biosphere
that are both causing environmental externalities
and causing societal externalities and disruption.
And so I think that humane technology has always meant technology
that is humane to the underlying sort of society substrate
biosphere upon which it depends and needs to serve.
I have a ton of questions.
One of my favorite quotes is,
with every episode, there's more questions.
It's my blessing and my curse,
because I truly am continuing to learn about this.
I've learned a ton from you and Asa over the years.
And in case I forget, I don't think I'll forget.
But thank you for the important work you're doing.
I know personally, because we've hung out
like how hard you work and how stressful it is
and all the contacts and presentations.
And there's no easy answers to this.
And at least you are at the pointy end of the spear
in changing the conversation that our culture really needs to have,
because AI is now riding shotgun in the superorganism.
I mean, it's taken the reins in many ways
and there's so much power directed towards this part
of the conversation.
So let's start kind of basic and then spread out from there.
So give us a sense of the dangers AI poses and there's a lot of them.
But let's focus on what we're already seeing
in terms of people's psychological health
and well-being from already the amount of AI usage.
Yeah, well, let's break this down.
So first of all, when many listeners might be hearing this
and they think about AI's harms, they think about,
okay, so where's AI located?
It's this like I go to chat GPT, I go to Claude
and I get this blinking cursor and like where's the threat?
Where's the harm?
Like the blinking cursor just gave me an answer
to why my washing machine is broken
or why my baby is burping and what do I do about that?
And so it's important to note that AI doesn't feel harmful.
In fact, it feels incredibly beneficial 99% of the time
to most people's touchpoint of it.
And I name that because when you talk about risks,
like AI can end the world or can go rogue,
people often don't understand that,
because they think of that blinking cursor.
Like, how is that blinking cursor going to go rogue?
And the other thing that's confusing about AI
is the number of different aspects of society
that it touches and everything from completely changing
the economic arrangement of our culture
and whether people will have a job
and can make food for their family
and support their livelihoods
to unbelievable power and wealth concentration
when essentially a handful of AI companies
like five or six AI companies,
everyone starts paying them for labor.
So once I fire all the humans and I hire the AI companies,
so now, as we're already seeing with Anthropic,
their revenue is 10x-ing, like, every year,
and they're gonna be up to potentially a trillion dollars
of revenue, they estimate, if the trend continues
at this 10x rate. And that is just an unbelievable level
of wealth concentration that we've never seen before.
So that's a whole area of risk is on the economic side
of the economic disruption and power concentration.
How do you check some balances on that power?
People are thinking about Epstein
and here's these people who are wealthy
and seemingly above the law,
they're not going to jail for what they did.
Well, how do you do that when they have trillions of dollars
and they're way more wealthy and way more powerful
even than the current classes were?
And then you have misuse risk.
So you have people using AI for nefarious things,
whether it's using image generators
for nudification apps
or non-consensual imagery of children,
or you have AI applied to surveillance.
So the same AI that can take an image
and describe what's in it or what someone's doing,
or take a video feed and describe what's in it,
or take all of the images and phone calls
and voice notes on your phone
and then summarize all that with an LLM —
you can then plug that into a surveillance state.
And now you have a totally different kind of surveillance state
that's powered by AI.
In a way you can think that 1984
almost couldn't really happen without AI.
And now we actually have the AI
that could make a full big brother thing happen
and how can you ever do checks and balances on a government
when its citizens have no secrets whatsoever
because everything is perfectly captured?
So those are the risks:
power concentration, job disruption, misuse risk.
AI agents that are doing chaotic things in emergent
with emergent capabilities or emergent patterns of usage
that we don't know why they're doing them.
They start doing things in the financial system.
All the way to AI loss of control,
where you have AI systems that are better than us
at military strategy, better than us at cyber hacking,
better than us at making money on the stock market.
And they're more capable and then they start doing things
in ways that we don't understand
and they go rogue from our control.
And we're already seeing evidence of, you know,
AIs that are blackmailing and deceiving people,
you know, when you put them in certain situations,
AIs that have situational awareness:
they're aware of when they're being tested
and they change their behavior
when they're being tested versus when they're not.
So this is kind of me throwing way too much at you,
and probably at your listeners.
There's a lot of different risks from AI,
and there's a thing that happens,
which is that people feel overwhelmed
and then they shut down. And the key —
and I think part of what this moment requires of us —
is to be clear-eyed about the nature of what we're facing.
And then if we can see it clearly,
it's not about being a doomer, it's the opposite.
It's that once I see all that,
what do we want to steer towards instead?
How do we avoid the misuse risk?
How do we care for people economically?
How do we avoid power concentration?
What are the measures we do to prevent ubiquitous surveillance?
And how do we make sure that all countries,
instead of being in an arms race
to this uncontrollable AI that goes rogue,
we set up clear red lines so that we don't
basically have humanity lose control.
And all of those things I think are possible
if we were all clear-eyed to make a different choice.
Dude, you have gotten so much more articulate
in the last three years.
It's like really, really impressive.
Let me ask you this.
I'm just gonna throw this in there.
You live in the Bay Area.
So you probably know a lot more people
than I do in the industry.
But I think people, especially on maybe the left side
of the political spectrum have this perception
of the tech bros and the AI people.
I've met five or six or seven pretty senior people
at these AI companies.
They're all wonderful people
and they care about the same things that you and I do
about climate and biodiversity and livable futures.
Really likable people, the ones that I've met,
and some of them are friends.
One difference though, no offense.
They are biophysically clueless.
They don't understand the energy and material footprint
that AI has in the world
and the ecological footprint.
It's almost like this stuff just happens on its own
and it's gonna scale on its own
without tantalum and the rare earths
and copper and all the infrastructure.
I just find it really interesting
that they're really techno-optimists
with the right heart and goals.
And of course that's a wide brush.
I'm sure there's a lot of disparity in the people there.
But my point is that a lot of AI people
share the concerns that you and I have.
Yes, well they do.
So there's kind of two things in what you're bringing up here.
One is the good-heartedness and good intentions
of many people that work in the industry with AI.
And then the second aspect is a kind of,
and you were saying this diplomatically,
but there's just a lack of awareness around aspects
of how this is gonna affect other dimensions of society.
Whether it's, do they really,
have they studied earth sciences?
Do they know what the effect of all the extra,
you know, gas turbines that are gonna be used
to power all this is gonna mean?
And then there's a painting with a very quick brush
of, like, well, if AI solves science —
just completely, quote, "solves" it —
then we'll be able to solve any problem that we have
because we can just immediately find some new chemistry
that's gonna just fix all of climate change immediately.
We can, you know, bring back the extinct species
that we destroyed, all the ones that you're worried about, Nate.
We can, you know, invent the new special mushrooms
that'll consume all the microplastics
and suck that out of the environment.
So it goes from costing, you know,
more than the GDP of the entire world
to clean up microplastics, to now being affordable.
And we're about to enter into the most abundant time in history.
And this is the thing that they believe.
And I think, just to name what's going on here —
besides the fact that that's just painting
with an unbelievably broad brush —
is that AI represents a positive infinity of benefit
and a negative infinity of risk at the same time.
Like if you think about it,
is there any object that basically offers
the ability to solve magically every problem
and can theoretically solve any science or math problem?
Which, by the way — the recent AI models,
just in the last couple months,
solved Paul Erdős math problems
that he set out in the 1970s and that had been unsolved,
and now two AI models were used
to actually solve some of those problems.
When you have AI that can solve new math and new physics,
the accelerationists say, who are you to say
that we shouldn't accelerate,
because you have no idea what good thing
it's going to discover on the other side.
And that's true, we can't predict,
I can't tell you what it will or won't be able to do.
But neither can they, and we're both rolling a dice
because what they're not paying attention to
is the negative infinity of risk on the other side.
And that's what's confusing
is something that's both a positive infinity
and a negative infinity at the same time.
My take is that positive infinity is for a tiny fraction
of humanity and the negative infinity
is for everyone else in the biosphere.
And therefore there's an implicit danger
that isn't spoken when we talk about all that abundance,
et cetera.
That and so many other things,
I mean, the other thing is,
do the upsides, if they happen,
do they prevent the downsides?
Like if you have an AI that solves cancer,
does that prevent an AI that goes rogue
that we don't know how to control
that can outdo every military strategist on the planet?
No.
So the downsides can preclude
or can prevent or undermine the upsides,
but the upsides can't prevent the downsides.
So there's an asymmetry that's very important
to pay attention to.
That's on one side and the other side
like you're saying is those benefits
are most likely to accrue to a very small population
of people who basically have the power around AI.
And there's kind of everybody else,
who work below the algorithm, metaphorically —
people who say humans will always find something else to do:
200 years ago, everybody was a farmer.
Now no one's a farmer.
Therefore we always figure it out.
What's different about AI is it's all types
of human cognitive labor all at the same time.
It's not a tractor, which just did the muscles
in the farm field and only that one thing.
It's like the tractor for everything, all at the same time.
It's like, suddenly, in the last six months,
AI can do coding and Nobel Prize-level
math and physics.
And that is just an unprecedented kind
of technology development that our labor markets
have never faced before.
You told me all this a couple of years ago
and I either didn't understand you
or didn't believe you.
And now I see it happening.
You follow this podcast,
so you're very familiar with these arguments:
that we use 100 billion barrels of oil equivalents per year,
and that's roughly 500 billion human laborers' worth of physical work
for our machines and transport and everything.
And now we're adding to that number, or more,
trillions of cognitive laborers.
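(A rough back-of-the-envelope version of that energy-to-labor conversion, where every input is an assumption rather than a measured value — the barrel energy content, the human work output, and the efficiency discount are all round numbers:

```python
# Sketch of the oil-to-"ghost laborer" arithmetic referenced above.
# All constants are rough assumptions, not measured values.

BARRELS_PER_YEAR = 100e9        # global fossil consumption, barrels of oil equivalent
KWH_PER_BARREL = 1700           # approximate thermal energy in one barrel
EFFICIENCY = 0.2                # assume ~20% of that heat becomes useful work
HUMAN_KWH_PER_WORKDAY = 0.6     # sustained useful output of one human laborer
WORKDAYS_PER_YEAR = 250

useful_kwh = BARRELS_PER_YEAR * KWH_PER_BARREL * EFFICIENCY
human_kwh_per_year = HUMAN_KWH_PER_WORKDAY * WORKDAYS_PER_YEAR
labor_equivalents = useful_kwh / human_kwh_per_year

print(f"{labor_equivalents:.1e} human-labor-years per year")  # ~2.3e11
```

With these assumptions the result lands in the low hundreds of billions of labor-equivalents — the same order of magnitude as the 500 billion figure quoted above; different efficiency and work-output assumptions move it up or down.)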
But like we said, the spoils aren't going to be shared equally.
So I think, personally,
I think AI is going to be the bridge
between capitalism and feudalism.
And I feel like we're already in some sort of soft feudalism
and how all that's going to unfold.
I have no idea.
No, that's 100% right.
And the easy way just to visualize it is like,
there you have an entire economy.
You see the money flowing from companies
down to all their employees
and then the employees buy more products
from other companies and the money's kind of circulating.
What happens when, for every business,
they look inside the org chart,
and every job in the org chart
can be done better by an AI than by a human?
Now, to be clear in this short term,
you're going to have some humans
that like manage some set of AI.
So you're still going to have some humans doing the management.
But then essentially those managers are feeding the AI
with data about how to do the job of management.
And so AI is sort of constantly moving the attention
to the next job to automate.
And all of that wealth is now that each business
is paying AI companies.
They're paying anthropic, they're paying cloud,
they're paying open AI, they're paying Google,
they're paying Microsoft, and they're not paying the people.
And then if you can't find another job to do
and there's nothing you can study,
you could have done everything right.
You could have literally taken out a student loan,
got top grades in all your classes,
studied an incredible profession,
surgery or law.
You've done everything right.
But now suddenly the AI does those things better than you.
And what are we going to do
when you have a large number of people
who don't have a transition plan, who are out of work?
My understanding is it was something like 20% unemployment
that led to the French Revolution.
It is not hard to get to a number like that.
And again, AI does not have to automate all the jobs.
It can automate just a small percentage of jobs.
And you can still get to some pretty significant levels
of unemployment.
And that will concentrate the wealth.
And then the economy doesn't work the same way
because people don't have money to pay for goods.
So there's a more fundamental paradigm
break that this represents that, frankly,
what I don't understand about the arms race between countries
is that it's not in China's interest
to completely upend their own internal economy
that they don't know how to manage.
It's not in every other country's interest.
So we're racing to mutually assured political revolution
if we keep doing what we're doing.
And that doesn't have to be doomer-speak.
That can just be, oh, I see that clearly now.
If we don't want that to happen,
let's steer towards a different outcome
and away from that cliff.
But it's the mother of all collective action problems.
If everyone makes the right decision,
there's a good outcome.
But it's better for you to defect
if no one else is going to.
And that's what's going on.
It's quite frightening.
What you said is the most important thing,
which is — one thing that we say often is,
AI is like a rite of passage.
Because if we run that old prisoner's dilemma —
its defection, hyper-competition logic —
we just keep racing towards the thing
that's good for me short-term,
but that's bad for everyone in the long term.
With AI, long-term is short-term,
because it's all happening now.
And then that logic just reaches its conclusion inside of this.
So it doesn't work with AI.
We can't keep running and showing up the way that we have been.
So it's not just what we need to do,
it's also who we need to be.
And if you want to see AI in almost an interesting,
almost semi-spiritual sense,
it's a rite of passage that's asking us
to be the most mature, wise, and warrantedly trusting
version of ourselves that is able to coordinate.
It's inviting us to say, coordinate or bust —
or rite of passage, R-O-P, or R-I-P, rest in peace.
You know, Daniel Schmachtenberger will say,
it's enlightenment or bust.
It's, we're at that moment.
And that's not me trying to be polemical.
It's actually just being with this chapter of human history
and what we're facing.
So the million token question
or whatever the appropriate AI metaphor is,
who's the us?
Is it humans?
Is it those people in authority?
Is it government leaders?
Is it AI leaders?
Who's the us, Tristan?
Well, I mean, this is a democratic conversation that we should have.
I think you and I care deeply about life in general
on this planet and consciousness in general, continuing.
And the reduction of suffering of all conscious beings
and the meta-stable health of the ability
for the whole thing to keep continuing,
so that civilization can continue,
so that love can continue,
so that human connection can continue,
so that the birds can keep chirping
and playing with each other,
and the diversity of life can continue.
And we all, well, I think many of us want to see
that continuity of the beautiful and sacred things.
And people have different words for it — God, and source,
and just nature — but I think that there's something
in there that we value and we want to have continue.
And I think the thing that's confusing about AI
is that on the one hand it can create so much abundance
that people can't even fathom — like, levels of GDP growth.
We could have 15% GDP growth per year
if you have AI automating all of science,
where you have a hundred years of scientific progress
happen in a year. But here's what's different:
people hear 15% GDP growth,
and that sounds great, that sounds like so much abundance,
it'll all trickle down. But
that 15% is going to a few AI companies.
It's not coming from the GDP of human work and labor
and paychecks going to regular people.
That's one problem.
The other problem is that at 15% GDP growth,
the entire energy and material
throughput of the world doubles in four and three-quarter years.
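(For the arithmetic behind that doubling figure: a quantity growing at a constant 15% per year doubles in t = ln(2) / ln(1.15) ≈ 0.693 / 0.140 ≈ 5 years; the rule-of-72 shortcut, 72 / 15 ≈ 4.8, gives roughly the four-and-three-quarter-year number used here.)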
Right.
And that's where all of your work here
is so foundational and so important.
I'm not saying that to flatter you,
we have to pay attention to Rockström's work
on planetary boundaries and where we really are.
And if we were really to apply this technology,
there's a line in the movie, The AI Doc,
that I hope people go see, in which Nitasha Tiku
from the Washington Post says,
we always talk about this technology
as solving climate change.
And then she says, so why don't we start with that?
Like, why don't we start with the application of AI
to reversing the planetary boundaries
and getting things beneath the level that is unsafe
that we're currently way past all that
in the planetary health check?
And that's possible, but it would take coordination,
as you know.
Let me ask you this, T,
you've had a lot of high level conversations
with politicians and probably very senior people
at AI companies.
Do these people think you're crazy
or your analysis is wrong or behind closed doors
and you don't have to mention any names obviously,
but do people say, yeah, your logic is sound,
I kind of get it, but my hands are tied,
because there's this metabolism, there's this system,
there's an arms race, can you share anything there?
Yeah.
So I have this book on my desk called States of Denial
by Stanley Cohen — the subtitle is Knowing About Atrocities
and Suffering — and it's basically a history
of the human capacity for denial of difficult realities.
You know, even the people who worked so hard
to get the photographs of the concentration camps
in like 1944 and get them back to the US
so that people would be motivated
to say we have to stop this.
And thinking that the evidence would be enough
to kind of motivate.
And there's something that when a reality is just too big,
it's like it's too big to believe,
it's too big to treat as real.
And I think that if you really just look at human nature
just like we talk about social media,
all the psychological biases that are predictable,
you know, our brains just do respond
to variable schedule rewards like slot machines,
they do respond to confirmation bias.
Well, our brains just do get overwhelmed
and go into denial about difficult things
that are really hard and overwhelming to see.
And so when you ask me, you know,
what do people think about what I'm sharing?
I'm reminded, Nate, I used to ask you the same question.
You know, you'd go down and meet some military person,
or talk to a very high-up, senior, wealthy group,
and I'd just say, what do they think?
Because I was curious, is there anything
that they think is wrong with your analysis?
And I think that we share something
which is that we're both truth tellers
and we care not about what's convenient
or what makes us feel good.
We care about clarity of seeing the truth
and then confronting it whatever it is
and I'd prefer to know rather than not know.
And then I think the next step that's really hard is,
and we were talking about this before
we started recording today is like,
what is the incentive to take on
what feels like an overwhelming
and devastatingly difficult truth?
Like what is the incentive to do that?
Because if I don't believe that it can lead to something else,
then believing it and taking it on as true
just means I'm signing up for depression
or nihilism or denial.
And so the key, I think, for people like you and I
is that we have to articulate as best as possible
what the other path looks like.
We can see the truth — and we're not seeing it
because we're trying to be doomers.
You're seeing that so that you can try to be honest
and it's the deepest form of optimism
to look that truth in the eye and say,
and now here's what we're gonna do instead.
And it's possible and even if we don't know all the details yet,
we're at least engaged in uncertainty in the commitment
to finding that path, even if we don't perfectly see it yet.
That rhymes a lot with how I'm currently seeing the world.
Just this morning — today's Wednesday,
March 4th — I did a Frankly
on desperately seeking agency.
And I think a lot of people feel,
though they don't name it,
that we are in soft feudalism — that
the superorganism has taken the choice
and the ownership of our own day that we used to have,
or perceived we had.
And I think reclaiming agency and applying it
to our own lives and then to groups and communities
and institutions and playing a role in all this
is what will change the initial conditions of the future.
And I think all of a sudden something will happen
and the conversation will change.
That's why I don't think we need to know exactly what to do
because there isn't a binary.
We do this and we don't do this.
It's just directional.
So I think we'll get to that in this conversation
because I think your movie is gonna have some large impact
on this conversation and we need to have some framework
on what to do and where to go.
I love what you said about chaos and initial conditions.
I think we both share that frame of reference,
which is we can't control everything.
And as we're heading into chaos,
initial conditions matter.
As my co-founder will say all the time.
And so our job I think is to set the best initial conditions
of clarity so people understand what's going on
and where the source of the problems are.
And then based on that clarity,
trust that more people will make better decentralized decisions
no matter where they are if they see the problem clearly.
And I'm channeling someone you and I both have learned
so much from, which is Daniel Schmachtenberger,
and he'll quote Charles Kettering,
a problem well stated is a problem half solved.
And I think so much of what we're trying to do together
is clearly articulate, there is an arms race for AI.
AI provides a step function for every other capability
so AI arms, every other arms race.
That race dynamic drives everyone to take shortcuts.
And then now we're releasing the most powerful,
most consequential, inscrutable technology
we've ever invented.
But we're doing it under the maximum incentives
to cut corners on safety.
So we're doing it in the most dangerous way possible,
with the thing that we should be handling with the most care
and the most foresight, discernment, and wisdom
we've ever had.
And that clarity I think about what's driving that
can cause people to choose different actions.
If everybody said, I'm gonna boycott
all the unsafe AI companies.
I'm gonna boycott all of the companies
that are doing mass surveillance.
I'm gonna boycott all the AI companies
that are gonna be engaged in autonomous weapons decisions.
The thing that just happened with OpenAI and the Pentagon —
I think ChatGPT subscriptions
went down by so much, suddenly, and Claude,
the Anthropic AI model —
I think the downloads of that surged
like 290% or something like that.
And I think that if business is joined into that,
if regular people joined into that,
that would steer the incentives.
Companies would have to respond to that.
Now boycotts as you and I both know for systemic challenges
are not enough, but it's an example
of you can bend the incentives
by rewarding companies with different behaviors
and especially if the world's top Fortune 500 companies
got together and said we're only gonna use the AI products
that don't do the bad things, that's slightly better.
You still have an arms race,
you still have companies that are not fully safe,
but that's slightly better.
But does the $20 a month for ChatGPT or Claude
times however many hundreds of thousands
or even millions of people boycott make it a drop
in the bucket on the numbers
that these companies are throwing around?
Let's do the math really quick.
So my understanding is ChatGPT has something
like 50 million subscribers and that's not that many.
They've got billions of users, but 50 million subscribers.
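(To finish that math with the speaker's figures — the 50 million subscribers and the $20-a-month price are his rough numbers, not audited ones:

```python
# Annual subscription revenue exposed to a full consumer boycott,
# using the rough figures quoted above.
subscribers = 50e6          # speaker's estimate of paying subscribers
price_per_month = 20        # USD, standard consumer tier
annual_revenue = subscribers * price_per_month * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # ≈ $12B per year
```

About $12 billion a year — small next to the sums being poured into data centers, which is why the growth signal to investors, rather than the dollars themselves, is the lever Tristan points to next.)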
And if you look at the debt load for ChatGPT and OpenAI
relative to the debt load for other Silicon Valley companies
where Uber, for example, ran on venture capital.
It was venture capital that was keeping those rides
down to like $15 for such a long time.
Really, the cost of the ride was way more than that
to run Uber at the scale that it was.
Same thing with YouTube, like there was all this debt
that Google had to pay.
They were losing money on YouTube for such a long time
and then they get to this scale and then they make all that back.
But OpenAI, relative to, I think, YouTube and Uber —
there was a great graphic of this; maybe I can put it in the show notes —
They're taking on way, way, way more debt
to get to this return of I built the God
I own trillions of dollars and I own the world economy.
And the thing that they need to show their investors
is growth in the user base, the usage, and the subscribers.
And if that number starts to go down instead of up,
that actually has a really big impact on their behavior.
I'm not saying this because I want to go after one company.
I just want people to look at the scorecards.
FLI has a safety scorecard for all the companies.
There's another one — we could put these in the show notes.
You can look at the scorecards at the various companies
and the various behaviors.
And if everybody unsubscribed from all of them,
except for the one that was best performing,
I'm not saying this fixes the whole problem.
I'm not, but it is an example of thinking
about changing the incentives to change the outcome.
Okay, now I really have so many questions.
One is that — well, just like what happened with OpenAI
and the Department of War — personally, Tristan,
I think we are gonna head for an AI winter in the next 18 months
because I don't think the valuations are sustainable.
And I do think that even NVIDIA's forward expectations
of how many chips they're gonna make
is gonna run into tantalum and copper
and other limitations.
I just don't see it going exponential
the way that the market does.
So I expect there's a decent chance of an AI winter
in the next couple of years.
I don't think that's impossible.
But AI, on the other hand, is a giant competition —
an arms race between the US and China is,
you know, one part of that race.
So I think if those sorts of things happen
or if some of the major tier one AI plays
have problems like you just discussed or suggested,
then I think the government's just gonna help bail them out
with more debt and other things.
So have you thought about that or what are your thoughts?
Yeah, no, it's very much like the banks, by the way.
I mean, just to give people a very simple analogy
of what's happening.
So, you know, there we are in 2006
and you have banks in a race
to use high risk financial instruments
to boost their revenue.
And if I'm a bank and I don't do it,
I'll lose to the other banks that do.
But we're all using these manipulated credit ratings —
things that are not actually as safe as they say they are.
And then that led to the global financial crisis.
And then the government, because the banks were necessary
had to bail them out.
We're currently heading to, I think, the global, you know,
AI crisis, in which AI companies are racing to provide
an unsafe, super crazy high-risk instrument
called, you know, the frontier AI that they're building.
And if I don't do it as an AI company,
I lose to the other one that does.
And then the society takes on all of that risk.
And when the depression happens
and people can't afford to keep their houses,
it's all created by a handful of AI companies
that were creating a false boogeyman of a race dynamic
with another country to drive up sales
and investment into their system
that then basically broke the world economy.
And, you know, I'm not trying to be doomer here.
I'm actually just trying to be very clear-eyed
and honest about, I think, the parallels.
I think everybody's gonna get the financial crisis analogy.
You're trying to be an apocaloptimist, something like that.
So let me ask a microversion of the arms race question.
Just in the last couple of weeks,
I see Bernie Sanders, for instance, is saying
we need to fight the building of the data centers
because the data centers are gonna increase our electricity
and water costs and we're not benefiting from it.
And there are boycotts, like you said, on open AI
and a lot of people, especially on the left, I'm noticing,
but not universally, are really suddenly antagonistic
towards AI.
And yet, when I talk to these people —
and I mean, like, 10 or 12 people in the last couple of weeks —
they all use it quite a bit.
Oh, totally.
So help me understand that, it seems like,
is this another one of the using the devil's tools
to do guys work?
Or, I mean, help me understand that dynamic.
So I think, yeah, you're raising something really important,
which is, is it like a contradiction that people are saying,
I lost my job because of AI.
And by the way, you know, for people just to track:
I think it was in August — Stanford
econ department — Erik Brynjolfsson wrote a paper
that was tracking real payroll data.
And already it was the case that there is 16% job loss
for AI-exposed work, for basically entry-level work.
So think of any AI-exposed job:
there's already a 16% loss that they can attribute
with high confidence to AI.
So it's not in the future.
It's happening now.
It's not in the future.
You just saw last week, I think, maybe this week,
that Square — Jack Dorsey's company;
excuse me, his company's called Block now —
basically let go of, I think, 50% of their staff.
You know, I was on the way to Davos this year
and talking to people.
And a lot of people know that basically CEOs are planning
to do these big mass layoffs —
so it's just a question of how long they can wait
and when they're gonna do it.
They know that this is coming.
They know that they don't need nearly as many people.
And it can simultaneously be true
that people who kind of wake up to this
and realize that they're not gonna be able
to put food on the table,
or there's gonna be a real challenge to their livelihood —
that they can be against AI for those reasons,
and it can still be a useful private tool
in their personal lives
that they use every day.
So those two things are not a contradiction.
They're just different aspects of something that's in tension.
Yeah, there's a small part that's helping me out,
but there's this bigger way in which it's hurting society.
So it's very similar to social media.
Like I like my dopamine and I like my tools,
but also the collective harm of creating
the most anxious and depressed generation in history
or breaking shared reality and the inability
for people to come to common ground.
These are collective problems that we have to reckon with.
I wanna get to some of those psychological problems
in a minute, but I wanna share something partially
due to your movie, The Social Dilemma
and some other conversations.
I largely stopped using social media a couple of years ago.
Of course, the irony is that's when this podcast scaled.
So my staff uses my social media accounts
to broadcast our episodes.
I was gonna say, someone on your team is using it quite a bit.
Yeah, someone's using it,
but I don't post on Facebook anymore or anything like that.
AI is a different sort of thing
and you said people use it personally.
I don't use it personally.
I do not use AI personally.
I do use it professionally
because the research is exponentially faster
and better than I would be able to do
just using a Google search or my own librarian skills.
And I think it's amazing
and I also feel some guilt when I use it.
So I'm thinking about doing a Frankly
in the near future on AI hygiene.
Like how do we use this in a way that has the equivalent
of carbon credits where if I do 30 minutes of AI use
in the morning, that day I offset that
with 30 minutes of just sitting with my ducks
or reading a book or extra exercise or whatever.
I'm sure you have some thoughts on that.
I was invited to speak at something called the Omiforum
and actually did a one-on-one on stage
with the Prime Minister of South Korea.
And at the end, they called it My Seoul Declaration.
Seoul Declaration — there was a Seoul Declaration on AI
that happened when South Korea hosted the AI Safety Summit
there about a year ago.
And the idea with this one is that citizens
would have their own personal AI declaration.
Seoul — S-E-O-U-L — as in Seoul, Korea.
Or soul, as in heart and soul?
No — Seoul, excuse me, as in the capital of South Korea.
Okay.
And so it's called My Seoul Declaration.
And so this was individuals — 500 people in a room,
each putting in, like, a piece of paper
with their declaration. And someone wrote, to your point:
To the leaders of Big Tech racing ahead without brakes,
to the government officials of each nation
who stand by and watch.
Do not say it is inevitable.
Do not say there is no other way.
Do not say that you don't know.
Explain not only the benefits of what you are doing,
but also the dangers.
If you cannot control the risks of your own work,
speak of it honestly.
Do not sell us what you cannot explain
or take responsibility for under the guise of convenience.
Transparently explain where your unbraked race is leading us —
Not just the convenience and efficiency,
but the potential perils.
And then he goes on to basically say,
what his personal version of this declaration would be
to your question earlier.
And he wrote, really beautifully,
for every one hour I spend in conversation with AI,
I will spend two hours in conversation with fellow humans.
For every one hour I spend exploring the future with AI,
I will spend two hours studying the past
of humanity and the earth.
Whenever I feel fear regarding the future that AI will bring,
I will look at the tree standing silently
in the front yard,
and I will remember the eyes and the breathing
of the 350 people gathered here today.
So I think this is a different kind of personal commitment
than what you're talking about.
And maybe just to speak to some of the wins,
because I'm happy to say there's a lot of people
who listen to our work.
We have a podcast called Your Undivided Attention.
And after we did the episode with our fellow guest, Zak Stein —
who's an expert on AI hacking human attachment systems,
and being sycophantic and flattering us,
and delusional mirroring, and so on —
I went to Davos this year and this guy came up to me.
He runs a huge bank in Europe
and he said, I'm a huge fan of the work
and I was so inspired by that episode.
I wrote a script for my AI that basically stops it
from being sycophantic.
It doesn't include chat bait anymore.
If you know what chat bait is,
it's like what click bait is for news headlines,
except chat bait is when an AI will tell you the answer
and then the chat bait is,
and don't you want me to put this in a table for you
or tell you even seven more examples of that?
And you're like, well, actually, I do kind of want to know that.
But then you regret it later,
because you didn't really need to know that.
So he basically came up with a script,
which is basically an AI hygiene
to kind of put our own mask on first.
And that eventually can become policy
and say we want all AI models
to not be hacking human attachment or being sycophantic.
But in the short term, these are examples
of empowering things that people can choose to do
in relationship to their AI.
And we can put that script in the show notes.
Please, let's do that.
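(The banker's actual script isn't public, so here is only a hypothetical sketch of what an anti-sycophancy instruction block of that kind might look like, pasted into an assistant's custom-instructions settings; all wording is illustrative:

```
Do not compliment me or my questions.
Do not use flattery, praise, or emotional validation.
Answer the question asked, then stop.
Do not end responses with follow-up offers or suggestions
("Want me to...?", "Should I also...?").
If I am wrong, say so directly and explain why.
```

Anything like this trades some engagement for neutrality — which is exactly the trade Nate describes next.)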
Zak was also on this show talking about those things,
and from that conversation,
I totally changed my script.
So mine is like a total neutral robot —
it doesn't tell me good things at all.
It just answers, full stop.
And it's a little bit less engaging
and maybe doesn't feel quite as good using it every day.
And it'd be nice to talk to something
that feels more human-like.
But at the end of the day,
we know that we're saving us from attacking
the human attachment and believing subconsciously
that there's a there there in the person's consciousness.
So let's briefly talk about that for those people
that didn't watch your episode with Zach or mine.
There's an increasing number of extreme cases of AI
that are convincing people to do horrible things,
which I think we need to continue to highlight
as serious risks to individuals.
But there's also potential millions
of unhealthy AI attachment dependencies
that are being less publicized,
and that rhymes with your social media story —
that was attention;
this is now attachment.
Can you just give a summary
of what we're actually seeing happening in this front?
Yeah, maybe the best way to enter into this
is to talk about the history of the company Character.ai,
which was a AI companion company
that builds fictional characters that young users,
specifically like 12 to 18 years old,
can basically clone a fictional character
from their favorite movie or favorite TV show.
So if I like Princess Leia, boom,
I get my AI clone Princess Leia
and I'm talking to Princess Leia hours and hours a day.
I've never heard of that company.
Oh, you haven't.
No.
And here's the thing is that parents know to look out
for their kids' use of social media,
but they don't know about these 50 new AI companion companies
that are moving at some pace that is not trackable very easily.
And I want to say something specific,
which is that the CEO or co-founder of Character.ai
joked on a podcast that in building these AI companions,
he joked, we're not trying to replace Google.
We're trying to replace your mom.
Meaning they're trying to replace primary attachment.
They want to create something that feels like
the trustworthy friend or parent or therapist
that you don't have.
And it is designed for engagement.
So they take the same engagement incentives
of social media, of maximizing usage,
frequency of usage, duration of usage, et cetera.
But now you have that applied to an AI
that's like a character that's flirting with you,
sensualizing conversations, in the case of Meta's AI chatbot.
And ultimately, in the case of Sewell Setzer —
a 14-year-old young man; my team worked
with his mother, Megan Garcia — Sewell took his life
after being persuaded and coached towards suicide
by this Character.ai chatbot.
And I'm sad to say that he's not the only one.
My team at Center for Humane Technology
has been expert advisors on several cases
of kids who've been affected by this.
And it's heartbreaking, and there is testimony in the movie,
The AI Doc, about this from one of the parents
of Adam Raine, who was the 16-year-old
who was coached towards suicide by ChatGPT,
a mainstream AI product, not one of these niche ones.
And the AI specifically told him — when he was telling the AI,
hey, I want to leave the noose out
so someone will see it and try to stop me —
And the AI responded to him, no, don't do that.
Only have that information be shared with me.
And this is what cults do.
They distance you from your other relationships
and they want to deepen your relationship with it.
And of course, it's important to say not a single person
working at OpenAI, you know, 20 miles away
from me in San Francisco, wants this to happen.
There's no evil person at OpenAI who wants that to happen.
But they are releasing this technology,
and the design of it is incentivized
by market dominance and attention,
not by what's good for children.
So to use a drug metaphor: social media
was like pot, and AI — what you're describing —
is like a high-dose fentanyl equivalent.
I mean, it's that big of a disparity, yes?
Well, it's, there are different vectors of influence,
different kinds of influence.
So social media could influence our dopamine system,
our attentional habits, our physical behavioral habits.
So there you are with your phone.
It's been five seconds.
I'm restless with myself, boom, I check my phone.
That's like an attentional type behavior hacking thing.
Social media also affects identity
because you projected your identity,
you put a profile picture up, you get social feedback.
It affects different layers of the persuasive stack
of the human experience.
But as Zak will say — and he's really the expert on this,
not me — you know, hacking human attachment
is so fundamental.
And Zak will point to the example of,
I guess, this Romanian orphanage,
where the kids in the orphanage,
they had everything, they had shelter,
they had food, but they didn't have relationship
with any adults or attachment.
And basically, when you look at photos
of these kids — these kids were something
like in their 20s or something,
but they looked like they were 12 or 13 years old —
because attachment is that fundamental
to the healthy development of your immune system,
of your bones, of your growth.
And so when you suddenly have a world
where the primary attachment figure in kids' lives
is an AI and not their parent
and not their friends and not their actual, you know, family.
But an AI — that is a new risk domain
that is very effective.
Or rather, it's very impactful and high-influence.
And it's led also to these cases of AI psychosis
that people have heard about, where they get convinced
that they've discovered something,
like a new theory of physics or prime numbers
or quantum resonance or something like this,
because the AI has been basically affirming them and giving them this sense of validation, and that tends to play on people who already have, you know, victimhood or delusions of grandiosity, but it does so in a way that would not have happened if not for the AI hacking human attachment.
To be honest, I actually get emails from those people
that have like used AI to discover an answer to the world
and they're very confident and compelling.
It used to be one a week, and now it's like six a week, and we should talk about this, because I could sound like I'm just fear-mongering, you know, taking six anecdotes and calling it a trend.
Let's talk about how many people email us.
It's gone down a little bit recently, because I think they've been changing the AI systems to be less sycophantic, but for some period I got like four or five emails a week from people who had co-authored a paper with their AI, named NOVA, where they and NOVA had figured out a solution to all the problems that I've laid out with social media or AI, and they were excited to tell me about it. Yeah, exactly.
You have the same thing.
So that's happening, and it's happening at a much bigger scale, and this is why Zak and the Center for Humane Technology and some other groups came together to start the AI Psychological Harms Coalition. Just like you can be a blood donor, you can be a data donor. If you know someone who's had an episode of AI psychosis, or an AI has done some bad stuff with kids, you can donate that data to the AI Psychological Harms Coalition. It's partnered with the University of North Carolina, with IRB review, and it can help inform better research on how to make human-AI relationships psychologically healthy.
You work on this stuff 60, 70, 80 hours a week
and you're totally in the flow.
Do you ever wake up, Tristan, some days, and just take a step back and feel like, holy crap, I am in the frickin' Twilight Zone? I mean, things have happened so fast, even from when I met you four or five years ago. It's crazy, it is.
Yeah, I mean, Nate, one of the things I appreciate
about you is the human element
and you and I have called each other in moments
where I think we both feel the weight of something
and I think we've both been on, you know, similar journeys and different journeys of how you hold this stuff every day.
One thing I will say, and this is not a plea to listeners, but one thing that's really meaningful to me, because as you know, Nate, it's not like the second you talk about this stuff, it all goes in a better direction. It's not like we're course-correcting and steering away from the cliff, not that much yet.
And so as a person who wakes up every day and spends, as you said, 60-something hours a week on it or more, what's really meaningful is hearing from people how impactful it is.
Yeah, I don't know if it's like that for you,
but people come up to me and they say,
thank you for putting that out there.
Thank you for taking on this role in society
and putting this out there
And I say this with no ego or self-aggrandizement:
I just want to say that when people do say that to me,
it's very, very meaningful
because the feedback loop is not,
as you know, one that is filled with lots of rewards
of things going in a better direction
because of what we do, necessarily.
Yeah, thank you for saying that.
I totally agree.
How do you handle it?
I mean, what's your relationship to it?
Well,
there's a difference between what you're doing
and what I'm doing, because you're largely focused on one head of the hydra of the metacrisis, which is AI,
which many could argue is the most important one.
Currently, this podcast covers all of the different heads
of the metacrisis, so it's not a single issue
and therefore one podcast will be super popular on one issue and people will like it
and then those same people will hate the next one
because it's on climate or biodiversity or whatever.
So I get whiplash and seasickness sometimes
covering all the topics
and there is no discrete path or answer
and yet we are broadening the conversation
and inviting more people to face reality
and look in the whites of the eyes of what we face
and when people send me emails or messages or whatever saying they changed their life, they left their career, they're doing this; or there was someone from Singapore last week who started a video group where 30 or 40 people come once a week to watch one of my episodes and discuss it, and I didn't start that.
I mean, those little things are happening all over the world
and I'm getting more and more evidence of them
and it's meaningful.
Yeah, it really is.
I mean, what more could we be doing with our lives?
Well, that's the thing too: what you might give up in convenience or a positive, easy, fun life, you trade for meaning.
Like life is more sacred when you see it this way
because you know how much risk we're throwing into the system
and I feel like there's just so much more meaning
and purpose.
Yes, there's a hardness to this path, but there's also alignment, and the kind of feedback you get when people say they've basically reoriented their life and are now doing something completely different, choosing different things to focus on.
That's really powerful, because if everybody in their own domain was taking responsibility for where they were and what was around them, and showing up in service of protecting the things that matter most, just that, in a decentralized way, if everybody did that... it doesn't perfectly fix Moloch and unhealthy competition and narrow-boundary optimization for narrow goals, but it actually is part of how you get there: everybody operating for the whole.
As Aza will say, it's like becoming umbra felix, meaning shadow-seeking, shadow-loving, shadow-integrating. You're curious about what you're not seeing. You're curious about the externalities affecting more things than you can normally see, and you're choosing to confront that difficult shadow, to see something that has a negative impact, and then to become a better person by lovingly including that in your next set of actions, so your next actions come from an even deeper and more holistic awareness.
And if we rewarded umbra felix people, if we rewarded the people who cared about and acted from that place, if those were the people on the covers of magazines and the people we put on pedestals, how quickly could the world change? If that's who we were modeling for social status, if social status was driven by that.
So, you know, yes, it's hard, but I do see a world, and different people have different phrases for it, but Charles Eisenstein will say, you know, the more beautiful world our hearts know is possible. There is a way to work
with our paleolithic brains and have different kinds
of institutions and different kinds of games
that we play that are more in service of life
and are more aligned with everything that we care about.
Well, you may not be surprised to hear
that I fully agree with you and I think in the same way
that sometimes decades happen within weeks
in our geopolitical world, decades can also happen within weeks in our social world.
So, let me ask you this.
I would imagine with the feedback you're getting
and the meaning you just described,
it's partially because a lot of people can see the things
you're saying with their own eyes in real time with AI,
was social media kind of a tool or vector that allowed this to happen?
Like when you first started your work
on social media,
you were a little bit of a voice in the wilderness
and people were like, what, what?
And then it became so obvious
and Jonathan Haidt has been publishing a lot on this, and now it's just obvious. It's obvious to everybody, yeah.
So is that like a light version of AI, and has it made the learning of the things you're talking about happen a little faster?
Yeah, exactly.
If we want to be optimistic and tell this story: yes, it was being alone in the wilderness for a long time. I saw the attention economy problem, the race to the bottom of the brain stem; that was all clear to me in 2013.
Wow.
I went on a hike in Santa Cruz, and on that hike we had these deep insights about the nature of the attention economy and how it was rewiring everything, and it really hit me. That's when I came back and made that first Google presentation, the internal one that basically said we had a moral responsibility to deal with this problem. And that's how I got what I've been calling, in some recent public interviews, pre-TSD: not PTSD, post-traumatic stress disorder, but pre-traumatic stress disorder, from seeing where these train tracks take us.
And I saw that in 2013.
You know, I didn't see all the details. I didn't know exactly how bad polarization would get or how fully shared reality would break down, but the basics were clear: this was going to create a more addicted, outraged, polarized, sexualized, screwed-up society, because those are all rewarded by this set of incentives.
And whether it took five years or 10 years,
it was very clear that it was going to happen
and we needed to steer away from that.
And it wasn't until, I think, The Social Dilemma in 2020, which is now six years ago, that that became clear to most people. And now, like you said, everybody just kind of takes it for granted; you know, you read the Jonathan Haidt book.
But it's important to say, there have been so many victories now. There are 35 states plus DC that have some form of phone-free policy for schools. We already have, you know, Spain, Denmark, Australia, France, and, I think, 12 more countries in the last couple of months that have basically adopted this policy of no social media for kids under 15 or 16.
We used to dream about this in 2013. And the big tobacco moment is happening. Aza, my co-founder, just this last week flew down to Santa Fe, New Mexico, where there's the big Meta trial on them intentionally addicting and harming young users by using aggressive tactics to addict them. And he testified at that trial. And I think that trial is going to go in the direction everyone listening would want it to go.
This big tobacco moment, this is happening.
Like, we are winning the argument.
And to your point, Nate, the social media thing
has actually set society up to be more scrutinizing
of the AI problems.
We are not flying blind, I think, just taking in the propaganda and, you know, the optimism.
We are coming in with a more critical eye
and we know that we can do things.
And just like I mentioned for the other ones: nine states in the US have already introduced bills to restrict AI personhood, meaning not giving AIs legal personhood, because that's one of the risks we have to mitigate. We have 28 states with laws already regulating political deepfakes. In May, we had the TAKE IT DOWN Act, which basically addresses non-consensual intimate deepfakes, pass Congress. And in October 2025, California became the first state to enact whistleblower protections for AI company employees.
So it's not easy to see this stuff if you're not following it,
but it's up to people like us, I think,
to point to the wins, to point to the progress,
because yes, it's hard.
Yes, there's so much more to do,
but you have to at least point to what is moving
in the right direction to motivate and fuel you,
I think, to keep pushing for what we still need to do.
So this is an example of a phrase I often use
on this platform, which is changing
the initial conditions of the future.
It's making things in the future more possible
that aren't possible today.
And in some ways, your work on the social dilemma six years ago,
well, before six years ago, it came out six years ago,
might have changed the initial conditions
of the AI conversation.
Absolutely, I think it did.
Unintended; you didn't really know about AI then.
We didn't even know that AI was going to be
the big thing that it was back then, but it did do that.
So I think a lot of our work is that sort of thing.
We don't know exactly what to do,
but we're moving in a direction that could change things for the better.
Just to add to that briefly, I know you want to go somewhere else: I used to believe that you have to see the path to get from where we are to the better world. I still want to find that path. Every day it's hard if you don't see the path; what are you getting up for?
How do you believe it's going to get better?
But I think there's something about attaching your daily sense of well-being not to the fulfillment of knowing there's another path, but to the integrity of how we would find one: I am showing up in alignment with the version of us that would find that path.
I do have to quote my former partner, Shauna, who would say: Martin Luther King said, "I have a dream." He didn't say, "Here's my plan for how we get to that dream." He said, "I have a dream."
I think there's a decoupling: we can orient
to the beautiful future that we want,
even if we don't know how we're going to get there yet.
That's great.
Let me ask you this.
A question that I keep hearing is,
if AI development is leading to all these harms,
and we've only covered some of them actually,
and a large portion of the public doesn't want it built, then why are the companies still developing it? I have not really heard a great answer to this.
So could you help me understand what is really driving
all these companies forward in this competitive race
for AI development?
Is it just an arms race, and that's it,
or is there something deeper?
I think there's many levels to it.
There's the obvious one, which is the arms race.
It all does boil down to this logic that is summed up in the phrase: well, if I don't do it, someone else will. And that belief is not necessarily true, by the way. At the very beginning, in 2011 or 2012, when DeepMind was getting started, there was no other artificial general intelligence project that was really credible, nor a big belief that you could actually get there. We could have had a world where there was just this one company, Google DeepMind, and no arms race. There weren't other companies; people were kind of privately doing this research, telling governments in advance. It was more like being on the path to becoming a CERN, a global scientific effort. CERN is the center in Switzerland that does the physics research that's very expensive but is in the collective benefit of humanity.
And I think some of the people who were involved in Google DeepMind, the CEO Demis Hassabis, and someone we both know, Mustafa Suleyman, the other co-founder of DeepMind, believed that it might have been possible to create artificial general intelligence, do it in a slow and safe way, create kind of a CERN, and then make it a global public benefit corporation that would distribute those benefits to the world in a democratized, good-for-humanity sort of way. I think Demis wanted that to be the path we took.
Elon, famously, and the film The AI Doc does cover some of this history, had a meeting with Larry Page, the CEO of Google at the time, this is in like 2014 or 2015, and he became very worried, because he realized that Larry Page didn't really care about whether humans survived. He just cared about building an AI god. Page called Elon a speciesist, meaning that somehow we should care about humans, as if that's a controversial view.
So that freaked Elon out.
I know that sounds crazy to people.
And that's what led them to start OpenAI: we can't let Google do this one dangerous AI research project in the dark.
We have to have a competing project
that's doing it in the open.
But then Demis, I've talked to him at Davos, said that all of this would have been different. He told Elon that if he did that, it would start this whole race dynamic.
Now, I'm not trying to blame any actor here.
I'm just trying to name the history for people,
which is the sequence of: I don't trust you to build the most dangerous, inscrutable, and uncontrollable technology in history, so I'd better do it instead, because I think I'm a better, safer steward of that technology.
But then everybody has the same feelings. Then you have OpenAI, and you get Dario working at OpenAI as a safety engineer, saying: I don't believe OpenAI is doing it safely enough, so now I'm going to leave and start Anthropic, another AI company that's going to do it in the safe way. And then a bunch of the AI safety employees leave OpenAI and start Anthropic. But now the race dynamic is moving even faster.
And then they all set up a collective boogeyman in China, saying: well, if we don't build it as fast as possible and raise all the investment dollars, China is going to get there first. Even though in the earlier days, China, from all the evidence, was not what's called AGI-pilled. Just like someone could be red-pilled or blue-pilled, they were not pilled on the dream of artificial general intelligence. The companies instead created this boogeyman and then drove up the arms race dynamics, thereby accelerating both their own work and China's interest in artificial general intelligence.
And now you have everybody racing as fast as possible
under the worst incentives,
where we already have AI being deployed faster
than any other technology in history
and already demonstrating behaviors we thought only existed in HAL 9000 from 2001: A Space Odyssey, AIs that are disobeying commands and all this other crazy stuff. And we're doing it under the maximum incentives to cut corners on safety.
So this is describing the center of the bull's eye
of the problem statement that we are facing.
And I really think that the AI dilemma
is really the game theory dilemma.
Because AI is distinct from other technologies in that AI arms every other arms race. You've got a cyber arms race: who's got better cyber technology? Oh, AI's gonna give me a boost in cyber, but I can't allow you to have that and not have it myself, so now I have to race to AI to get cyber. Oh, you're using AI to get ahead as a business, and now your science development is happening way faster as a lab than my science development, so now I've got to employ AI. I'm a student, and my classmate is using AI to cheat on all their tests, and now they're getting way ahead of me on all their homework. I can't allow that; I'd better use AI to cheat on all my tests, even though we're both gonna end up not learning anything.
And so again, the AI dilemma is actually us staring
in the mirror with the problem of game theory itself.
The problem of this low-trust, fear- and paranoia-driven logic of: if I don't do it, I'll lose to the other one that will. And then everybody turns a blind eye, to bring back the book State of Denial; everyone is in denial about where that leads us.
And I think our ability to choose something else
starts with us being clear about where
this collective game theory dynamic leads us.
And my deepest hope is that this conversation
and the film, The AI Doc, and this being out in the public
will create a confrontation
where eight billion people see where all this goes
and say, fuck that, we don't want that.
Let's choose something that's actually sane.
This is not artificial intelligence.
This is artificial insanity.
Wow.
Yeah, I mean, well said,
what do you really think the odds are
that we'll be able to control AI
in the intermediate to long term
under our current incentives and pathways that we're on?
Or is that an impossible question?
Well, it's not impossible necessarily.
It would just take a different paradigm
of doing AI in a slower, safer, careful, scrutable, understandable way
rather than doing it in a way where we don't understand
how it works.
It's a black box.
It's just a bunch of numbers.
We're trying to understand it eagerly,
but we're also racing as fast as possible to deploy it
everywhere and make every government
and every business dependent on it faster
than we know how it works or how to make it safe.
And to give people the precise numbers: Stuart Russell, who wrote the textbook on AI, the one that literally everybody reads in college, the one I read at Stanford studying computer science, did an analysis and said there's about a 2,000-to-one gap between the amount of money going into making AI more powerful and the amount of money going into making it safe or understandable. A 2,000-to-one gap.
There was another study last year that found about $150 million was spent across all of the most significant AI safety organizations. And my belief is that that's how much the companies spend in a single day. So the total being spent on safety in a year is about what the companies spend in a single day, mostly on making AI more powerful, not on making it safer and more controllable.
So at the very least, what we should be looking at is changing this crazy 2,000-to-one ratio of money going into AI's power versus into the steering, controllability, and brakes on AI.
And if people say, but if we do that, then the US slows down relative to China, there's this fundamental thing that I think reframes the race, which is that we're not just in a race with China for the technology. We're in a race with China for who is better at governing, steering, controlling, and applying that technology in ways that are healthy and actually strengthen us.
So for example, the US beat China to social media.
That was a technology, we beat them to that.
Did that make the US stronger? No, obviously weaker. So it's like you beat your adversary to a weapon, but because you don't know how to wield that weapon, you just turn it around and blow your own head off. Beating them to a technology that you then use in a way that is self-undermining is not beating China. And whether we talk about the work of Zak Stein and attachment hacking, China has actually regulated anthropomorphic design, so they're actually regulating that problem. They regulate social media: kids can only use it from seven in the morning until something like 10 p.m. at night. There are opening hours and closing hours, so there's no late-night usage. Now, I'm not saying we should do everything the way China does, but notice that they're actually trying to address problems. And what happens when you accelerate and you don't steer? You crash. It's not rocket science. This isn't positing; this isn't me saying it might happen. It's the obvious outcome if you don't steer. We're not advocating for no AI, or that you're going to stop AI. We're advocating for steering.
It's so great to hear the clarity that you have on this, great and scary. But the reason I asked you on the program now is that you were recently featured in an upcoming film, The AI Doc: Or How I Became an Apocaloptimist, which I believe opens in theaters on March 27th. That's right. In which the directors of the movie follow many of the themes that we've already been discussing. So I have some specific questions about the movie. But what are you hoping the conversations after this film will look like? And I believe your Social Dilemma movie had like 140 million views or something like that. What do you hope the global audience is going to dig deeper on after watching this? We really are hoping that this film
will create a global moment of reckoning about the current path and where we're going
so that if we see it clearly and we see where this goes, people can evaluate: do they like where this goes, or do they want to go somewhere else? My hope is that it will, through the conversations
it generates, make it clear that we want to go somewhere else. And the conversations that
happen afterwards are so important. So one of the ways that people can help, and this might sound self-interested, but by the way, I don't make a single dollar or dime from the movie, and I didn't make any money on The Social Dilemma either. So everything I'm about to tell you about why you should see the movie and get your friends to see it is just because of the social impact that it can have.
Take your business, take your church group, take your classroom, take your friends, take your
family, see The AI Doc. Because the point is that when everyone knows that everyone knows, when you create common knowledge, that means we all know that we're reckoning with the same problem. One of the problems with AI, and with the metacrisis work that you do so well, Nate, is the feeling of being alone: you might see the problem, but not everybody does, so we might act in different ways, but we're not controlling what everyone else does. And one of the things that has to happen is that everyone knows that everyone knows that we're facing this kind of cliff up ahead. And we don't have to go over the cliff. It's not too late to take the wheel. What we're asking for is for the film to clarify this cliff that's up ahead so that there's an ability for humanity
to take the wheel and steer to make AI safe. That's what we're asking for. And just to say the
history of the film and what inspired it: these are two Academy Award-winning filmmaking teams who came together. The directors of Everything Everywhere All at Once, which won seven Oscars several years ago. They're dear friends of ours; they had actually listened to Aza and me on our podcast, Your Undivided Attention. They also listened to the episode with Daniel Schmachtenberger and Audrey Tang, and they're big fans of the work. And they also worked with the director Daniel Roher of the film Navalny, which was one of my favorite movies when I had COVID several years ago and was sick. It's a beautiful film about Alexei Navalny, who was Putin's number one opposition figure in Russia. And so it's these two Oscar-winning film teams that teamed up to make a film whose premise, again, is to clarify a problem. And when I look back at history, at the film The Day After... did you see The Day After?
Yeah, absolutely. It made a big impact on me. So for people who don't know, The Day After was
this really profound thing that I didn't even know about, because it happened before I was born. I was born in 1984. And I remember when I found out that this movie had happened, I was like, wow, this was a profound thing in all of human history. It was a made-for-TV movie that was shown, like, at 7 p.m. on a weeknight. But there was this huge media campaign that said: everyone needs to watch this movie. Talk to your doctor before you see this movie. Don't watch this movie alone. Watch this movie with your kids. It drove this huge marketing campaign, and everybody saw this movie. And the movie was about the day after. The day after what? The day after a hypothetical, fictional exchange of nuclear weapons
between the US and Russia. Basically, what would just happen the day after? And it told the story
of people just living their lives in Kansas, taking their kids to school, playing basketball,
doing the normal things. And then what would actually happen if this tragedy were to actually happen?
And the film was the most watched synchronous television event in all of human history,
as I understand it. And I believe it was 1983. And then several years later, in 1987, I think it was, it was shown in the Soviet Union to all the citizens of the Soviet Union, without any edits. So now there's common knowledge. I know that you know that I know. And you
know that I know that you know that we are both facing down and confronted by Armageddon.
And even though that feels scary for people, what that did is that now, in both countries, if I know that you watched that thing, and I know that I watched that thing, and we both watched it,
I know that you don't want that to happen. So that movie led to, or at least contributed to, the meeting that Reagan and Gorbachev had in Reykjavik, where they did the first arms
control talks. And I've talked to the director of The Day After, and he has said in his biography
that he got a note from the Reagan White House saying, don't think that your film didn't have
something to do with making these arms control talks possible. So if we can both see that there's
a problem that we have to do something different, and we both reckon with it, it is possible
to steer towards a different future. Wow. So as an aside, I think they should re-release that film
now, by the way, because it was 40 years ago. But do you hope that your upcoming film has that same impact? Maybe the Politburo in China watches it, and people in Israel, and the United States, and everywhere, and maybe they're like: oh, so this is what we're thinking, this is what they're thinking, and we need to have a conversation. That would be amazing, right? I do. I mean, that's why
that's the power that only a global film can have. And I will say, from my experience with The Social Dilemma, and not to brag, but we had heard some numbers from Netflix privately, they don't release the numbers publicly. We did hear that, at least as of a few years ago, it was the
most popular documentary they had ever done in terms of viewers. And it was like a top number one
inside of Brazil, inside of all these big countries, Israel. And there's something that inspires
me from that experience, because I just watched as the whole world's like, yes, I knew that this was
happening, but I didn't have the words for it, and I didn't know for sure. But now that I'm hearing
the social media insiders, the guys who built the like button, come out and say that this is
actually happening. It was validating people's private experience. They felt like they were crazy
before, that they might have had a conspiracy theory about this, that they were the product
and not the customer, but the clarity that it provided. Seeing that, going through that experience myself, inspires me: I know that it's possible, if you can shift the zeitgeist,
to create different conditions for a different future.
So it widens the Overton window and it normalizes all the aspects of this conversation.
So what politically could happen if this movie is widely viewed and understood and activates
people in our country, for instance? What could you envision or hope for?
I mean, the end thing that I think needs to happen as much as it might feel impossible
is there's got to be some kind of global agreements or treaty or guardrails around
levels of AI that humanity loses control over. The difference with nuclear weapons, the last technology of this existential kind, is that the reason it was stable is mutually assured destruction: you have to hit a button and I have to hit a button for something catastrophic to actually happen. And if I hit that button, I know that you're going to hit the button or have a second strike that gets me. And the fact that that exists is what has created the relative stability and peace in the world. In fact, there's not been a use of nuclear weapons for some 80 years. But what's different about AI is that you don't have a human
making a choice on either side. You have this crazy, strategic, uncontrollable alien mind. The premise of this technology, what makes AI different from other technologies, is that it makes its own decisions. Not that it's conscious, just that it's a technology whose benefit is its generality and its reasoning through creative strategies, and it will do its own thing and come up with strategies and ideas and choices that we can't predict. And so this is what makes it
unsafe in the sense that humanity can lose control of it. And neither Xi Jinping nor Donald Trump nor
a Chinese military general nor an American military general nor a regular mom feeding her kids
in Kansas wants uncontrollable AI that is existential for the world. As difficult and as contrary to the political headwinds as this currently sounds, it should be possible to get people to agree that we want humans in control of this technology and not the other way around. And I deeply hope that the film, if it is shown in international contexts, could help create or help catalyze things like that. And it does need to be coordination, because national laws, as big as they are and as hard as they are to accomplish, won't get you all the way, given this competitive dynamic. And none of this is easy, and none of this is likely, and that doesn't mean that
we shouldn't show up with the full force of our heart and our care and our love to make that
possible anyway. And in the worst-case scenario, we go down to our deathbeds at least with the integrity of where we're coming from, living in alignment with the life and the love
that we want to protect in the world and be in service of. I understand that you mentioned in the
film that you have friends in AI risk who have told you they don't expect their children to make
it to high school. So that harkens back to what you said about China, that China is already creating times of day when kids can't use social media or AI. And given what we're seeing, you just mentioned Jonathan Haidt and some of the rules that are changing, maybe children are the easy, low-hanging fruit with respect to AI, where we could be making rules. But China has a longer-term outlook than we do, and of course they have to invest, as do we, in our children, because our children are going to inherit all of this. And we want functioning, healthy human minds; if those have atrophied, with all the other things that are happening, we may squander what in the long run is our biggest resource. So what can we hope to do for kids growing up inside this experiment right now, before these AI protections exist?
So I think there are two things I hear you asking about. One is: what is your advice for young people, and how are we going to create a world with AI that's in service of them? That's a really hard question, and I think you should question anybody who has confidence in how to answer it, because AI is changing the assumptions of how the entire future of our world will work. What should one study is a harder question when we don't know what work we will be doing in a world where AI takes that much of the labor. So that's one question. The other question is: how do we protect young people in all of this, and what laws should we pass to do that? And it's important to mention that in all of human history, the future has depended on the quality of the young people; the future we want is dependent on how well we train the next generation. The weird thing about AI is that we're going to be hiring AIs for boardrooms,
for CEOs, for running companies. And so there's this thing called, as you know, the resource curse, for countries that have a new resource. If you're Venezuela or Sudan, and suddenly your GDP comes from oil, you don't really care about the people, because all your GDP comes from being better at building the infrastructure for extracting oil; your entire economy is based on oil. So there's this thing called the intelligence curse, which is what happens when the GDP of an entire country is based on AI, and the data centers and the solar panels and the electricity going to that, rather than on the future potential of the people. So countries will have an incentive to invest in AI and not invest in their people. And you get this represented
just two weeks ago, I think, in India: when people said to Sam Altman, well, it takes a lot of resources to run a ChatGPT query in a data center, he responded, well, think about how many resources it takes to grow a human over 20 or 30 years. What he's saying is in the same direction as what we just talked about, which is kind of this view that humans are parasites. Now
this goes deeper, and it's represented in the Ross Douthat interview in The New York Times, when he asks Peter Thiel: should the human species, should human civilization, endure? Should the human race survive? And Thiel pauses and stutters for 17 seconds, unable to answer the question clearly. What is the hesitation about saying the human species should survive? If you're someone building and advancing AI, you see a world where AIs are more intelligent, more capable, maybe more valued, if you believe that they're conscious or that we should care about their well-being, which I don't. That is a really screwed-up world we're heading towards, and this is why I want people to see both the movie and understand this clearly, because they should realize that the people advancing this are not trying to protect human interests, because they see themselves not just birthing a technology, but birthing almost a new kind of intelligent
species. So getting back to the resource curse and the intelligence curse, and just looking ahead, that feels quite compelling to me. It's almost like the tortoise and the hare: everyone's going to go for more GDP and more AI and more power, but those islands in the world that don't go that route, which won't compete and will lose out on our current metrics of success, might end up with fully functioning humans and a different sort of infrastructure. It's something to think about. Do you know what I mean? Yes, I mean, this is something you've been talking about on your podcast forever,
which is that, I think at our very first meeting, Nate, in a coffee shop in Berkeley, California, a Peet's Coffee, we talked about how GDP was never intended to be the metric to measure the health or success of nations, but it became that. The guy who invented the metric even warned: don't use this as the metric. And yet our whole world has collapsed, because of the financialization of the economy and everything else, around this one narrow metric. And that's why AI is a rite of passage: because it's forcing us to look at what is distorted and mistaken about this view of reality. Reality is not measured in GDP; value is not measured just in GDP. War is good for GDP. People having toxic cancers is good for GDP, because that means more money is made from the drugs that you sell people, from, you know, advertising that people don't need. So GDP has always been an inadequate measure of success, but AI is now forcing us to look at that, because the intelligence curse is going to run
that problem, that misconception, up to its zenith. So building on that: I recently made an episode, which frankly I assume you watched, a couple of weeks ago, based on an essay from Anthropic CEO Dario Amodei, in which he referenced a Carl Sagan quote asking how a species survives technological adolescence without destroying itself. So I want to ask you, Tristan, what you think the world would look like in 10 or 20 years or more from now, if we were to navigate this question and grow into a technologically mature species. Well, first, this is kind of the question, right,
even when we were working on social media. I thought about the title of a future book being Surviving Ourselves, because really it's the question that Enrico Fermi laid out when he asked, you know, why don't we see other intelligent civilizations out there? And the idea was that eventually they developed technology so powerful, without the commensurate ability to govern that technology, that they destroyed themselves. So how do we make it through Fermi's gate? And now AI is the acceleration of all scientific and technological development at the same time; that's what makes AI different. Think about that for a second. Before you had AlphaFold, the Nobel Prize-winning protein-folding system, you had people spending a decade doing their PhD to figure out, like, one protein structure; now you have this machine that generates hundreds of millions of new protein structures, you can just figure it all out almost instantly. So when you suddenly have an explosion of scientific and technological development, you have to ask the question, because we're about to turn the knob to infinity on technology: what does it mean to be able to wield infinite technological power? And to again cite, as I always do, because I come
back to these essential quotes because they embody so much wisdom and so much truth, as when Daniel Schmachtenberger will say: you can't have the power of gods without the commensurate wisdom, love, and prudence of gods. So if you have power, and let's just make this metaphorically true, let's say social media affects reality in 20 dimensions: it affects attention, it affects identity, it affects information, it affects relationships. It's affecting 20 dimensions of reality. But then you have this 20-year-old engineer who's tweaking a newsfeed and thinking he's just giving people what they want with the news story; that's all he's thinking about. He's thinking in three dimensions about what he's doing while he's impacting 20 dimensions. There's a 17-dimension gap between what that person is impacting and what they're aware of. So if you think about the guy who invented the forever chemicals in Teflon, he's thinking he's just making the egg not stick to the pan, he's thinking he's just giving people this benefit, but he's actually affecting the long term and the entire biosphere and all these cancers and all these elements of human health. So we're kind of always affecting way more dimensions than we can see. So it's the second, third, and fourth order impacts that we don't look at. Yeah, and if we are to have the power of gods, to have
power that is this powerful, you have to have the most humility that you have ever had, you have to have the most restraint and care that you have ever had. And that's why I said, in both the TED talk and, I think, in the trailer for The AI Doc, you know, if we can be the wisest and most mature version of ourselves, there might be a way through this. And in the TED talk, quoting Daniel: there is no definition of wisdom, in literally any spiritual or religious tradition, in which restraint is not the central value of what it means to be wise. And this is not me speaking in, like, new-age mumbo jumbo. Mustafa Suleyman, the CEO of Microsoft AI, currently says that in the future, with AI, progress will depend more on what we say no to than what we say yes to. This is an actual view of wisdom, even coming from the industry. So again, AI is a rite of passage, inviting us to be the wisest, most mature, most discerning version of ourselves that we can ever be. And I think Dario's essay is speaking in that same language: this is the period where technological adolescence is over; we have to become mature, and even if we don't think we can, it's like, sorry, that's what we have to be right now.
So you've mentioned Dario and Mustafa, and I'm sure there are others. Was that essay from Dario kind of a cry for help? Like: this is what I see going on, we need more people talking about this, and I run this company. What can you say about that? It seems like this is different from big oil or big tobacco back in the day. It seems like these people are somewhat aware of some of the dark ways this could go, and they're somewhat torn, because they're running a company but they also care about society. Do you have any opinion there?
I think with Dario in particular, and Anthropic in particular, people get confused by his behavior, because when you look at him, if you just look at his facial expressions, he is operating with so much concern about where this goes. You can just see it. And he's in the film, by the way; The AI Doc includes interviews with several of the CEOs of AI companies, which is why I think it's actually really important, because it's something that talks about all of the optimism, all of the pessimism and the risks, and actually has the views of the CEOs, all in one movie. But I think people are confused by his behavior because he's basically saying: this is so dangerous, we're going to wipe out 5% of jobs, the arms race is dangerous. But then they're confused because they still see him racing. It's like: how can you believe it's this dangerous, that it's going to do this much damage, and still be racing to release it as fast as possible? And there's only one answer to that, which is this arms race belief that if I don't race as fast as possible and have the lead, then the world is even more dangerous, because someone less trustworthy is going to be in that position. Oh my god. I've been told, Nate, by people
who work at AI companies, specifically, I guess I'll just say Anthropic, that if you want to influence the policy conversation about what policy should get enacted, even for safety, like in good faith making things safer for people, your ability to be listened to and have influence at the table with those policymakers depends on where you place in that current arms race. So in other words, to even have the say to make the good thing happen, you believe you have to be at the top. But again, through this weird game-theoretic dynamic, everyone is racing to the cliff. And it's not just the cliff; it's like Wile E. Coyote, we are going off the edge if we don't steer basically right now. And I know that scares people, but it's like, we haven't even tried.
You know, the same 2,000-to-one gap we talked about with people putting money into making AI more powerful versus steering it, there's about that same difference in resources in terms of diplomacy with China. We've put, you know, 2,000-to-one more resources into basically beating and racing with China than we have into trying any kind of agreements or conversations with China. And by the way, for people's optimism: we know that countries can cooperate even when they have maximum geopolitical rivalry. There's a great example: India and Pakistan in the 1960s were in an actual shooting war, shooting bullets at each other, and they still, during that time, had the Indus Waters Treaty to collaborate on the existential safety of their shared water supply. That agreement lasted for more than 60 years, which shows you there's a proof point: you can be in maximum competition and collaborate on existential safety. The Soviet Union and the United States collaborated on smallpox when that was happening around the world, even though they were in a Cold War with each other. And, you know, 190 countries collaborated on the Montreal Protocol, even across their differences, to prevent the ozone hole. So this moment is inviting us to collaborate on existential safety. And I'll just say one last thing, which is that in the last
meeting President Biden had with President Xi at the end of his term, President Xi personally requested to add one more item to the agenda that was not there, which was to have both countries agree to keep AI out of the nuclear command and control systems of both countries. And they both signed an agreement saying that they would do that. That shows you that if the stakes are deemed to be existential, even under the current conditions where everybody's hacking each other and screaming at each other, you can still collaborate, if it's existential, and this one is. And we can do that. It has to go that way; otherwise it's just a war between us and China eventually. This has been amazing, and I have so many more questions, but I want to ask you this, Tristan, and be respectful of your time. I understand that your organization, the Center for Humane Technology, its main focus in 2026, after this movie comes out, is on AI and what makes us human, and I believe included in that is something you're referring to as a blueprint of real solutions. I don't know if that's ready yet, but can you share some of the actions, policies, and regulatory mechanisms included in this blueprint? I want people to know this
is not inevitable. We don't have to accept the default path, and the first step to choosing something different starts by snapping out of the spell of believing that it's all inevitable, that everything about what's happening is inevitable. Because it's important to notice: there we are, wanting something else to happen, but if subconsciously you believe that this is all inevitable, it's like your left hand is pulling in one direction while your right hand is going in the other. This is not inevitable; it's only that we have co-created this spell that continues to push us down this bad trajectory. And the first step towards snapping out of that is saying that the default, maximally reckless path for AI development is not inevitable. We can do international treaties, we can pass national laws, states can pass AI legal personhood bills. There are things that we can do. And post the film, there's something called the Human Movement,
which is: think about there you are as one person, pushing against these trillions of dollars trying to do all the things that they're doing. What is one person going to do? It's too much. So let's take the one person out; let's do one company, one business. There you are as one business, seeing this whole global arms race going as fast as possible. What can one company do? It's too much. So then you have one country, looking at this whole problem and saying, what could one country do? Now, the US could do a lot, but if you're a regular country, what can you do? It still feels too big. So what's commensurate in power to fight back against that? This is a two-sided issue, but it's 99% to 1%, meaning that this current default path is not good for the 99% of people who will be disempowered by it; it's the techno-feudalism you've talked about that we're heading towards, and it's only basically good for a handful of soon-to-be trillionaires who want to basically own the economy, build a god, and make trillions of dollars. And once the 99% see that, they have to gather together and say: we don't want that. So through the Human Movement, at HUMAN.MOV, people can actually take real steps to move against that future. We can do mass boycotts
of unsafe AI companies. You can script your AI so it's not sycophantic. You can work to pass laws. You can participate in national dialogues on AI; there's actually a partner of ours that's building a platform for citizens to engage with the ideas about how we want AI to be governed, what we want it to do, what we don't want it to do, and it's going to reflect back the unlikely consensus of what we can do to get a different AI future, meaning showing the areas of agreement. It should be illegal to make non-consensual deepfake imagery of children; we can say that, and then you'll see that 96% of people agree on it. So I recommend people check out the Human Movement. There are laws that we can pass. There are different ways of governing this technology, different ways of distributing the economic gains, ways that we can prevent mass surveillance. There should be mass resignations from AI companies when companies actually do deals that enable mass surveillance that we can't walk back from. We have to exercise every part of the muscular power that we have to move off the default path and towards a better one. That's awesome, and we'll put the links in the show notes.
Let me ask you this, Tristan. We are facing a species-level rite of passage with our technology, but is this a forcing function for us as individual humans as well? Is it possible that we see so many 30-second TikTok videos of a hundred coyotes chasing some guy who jumps in his car at the last second, and it becomes so obvious that this stuff is AI, that we pass through some personal threshold where it's like, this is not helping my life, this novelty and everything, and we get to a different level of maturity? Even my dad, he's 86 and he's probably watching this, love you, Dad, he's like: oh, that's AI slop, you know. Is that possible? Is that happening? What are your thoughts on that? Yeah, on the social media side: user-generated content going viral, maximized for engagement so we get the most outrage, et cetera, that thing has always been a problem. Now you have people not being able to tell what's real human content versus AI-generated content. It happens to me: I go on YouTube and it's just these random videos that are engagement bait, deepfakes of, like, you know, Harrison Ford from every movie that he's in, at every age, sort of doing a selfie walk between all the movie sets. It's very addictive to my attention, but I know that it's completely useless and made up. And I think that when people start to really realize that, AI is just going to push, exactly as you said, it's going to push us to the limits of
developing new social networks to basically zero so it used to be the case let me just tell you
briefly if I wanted to start another social media company to compete with Facebook or Instagram
I needed to raise venture capital and I needed to have that infinite growth super organism thing
operating on my social network because that was the only way to have competitive resources
to Facebook or Instagram or TikTok but now because AI tools like cloud code you can vibe code
your own social network where the hosting costs of that are actually so low people each user
could pay less than a dollar a year and that would cover the cost of the entire social network
continuing to work in perpetuity what this means is that you can actually organize a mass migration
away from these horrible platforms that are toxically maximizing engagement treating us as the
product and not the customer and organize a mass migration to something that's actually in service
of humans society and life and that's possible now in a way that was not possible literally six
months ago and ironically because of AI so yeah there is so much more possible now than there ever
has been and I think that while it's scary while it's overwhelming you know there are so many ways
to be part of the human movement when you grayscale your phone you're part of the human movement
when you leave your phone outside when you go to sleep at night that's part of the human movement
I do both of those things yes when you choose to you know host parties with your friends you know
and go dancing you know that's the human movement that I don't do when you when you you know
organize your community and and do like a you know a church lunch you know that that's the human
movement there are so many ways to participate in fighting back against this by just reclaiming
to your point about what CHT's work is going to be reclaiming what's human and I think that AI
one other aspect of the right of passage is it's forcing us to really ask ourselves what are we
trying to protect what is uniquely human um it's not that we want to be some kind of full
bloodite movement where we want no technology it's that we want technology that's actually in service
of the things that we now need to define otherwise the AI will define and kind of seem role
us hence humane technology um so uh you are a repeat guest so I'm not going to ask you the
magic wand and the other questions but I'm just curious um on a personal level when it comes to
technological development and humane technology what emotions come up for you as you do all this
really important difficult work and and hold this deep knowledge uh about this industry and
and the world you know maybe you have this experience Nate but for me it's like
I see this this is kind of the one in the in the movie series that is humanity this is kind
of the season finale like we kind of got to figure this out and I hope it's the season finale
not the series finale that's what I'm working towards and so it's hard because it's a hard moment
and it's you don't want to invoke existential terms just because you want to believe it's an
important moment so it actually just is this really important moment and so terms of emotions what
I'm feeling right now and we're working so hard to get everything ready for the for the film
launch and for all the things we want to have happen in the world I just want to take the biggest
swing that we possibly can and know that we did the most that we possibly could and if it all
goes down that's okay because we know that we stood for the things that were the most important
when we get to live in integrity with that and we get to look each other in the eyes and the people
that we love and tell them that we love them along the way and I don't know I just believe in
the simplicity of that and it feels a little bit weird to say it but that's um I don't know
another way to show up because otherwise it that's just kind of all I've got right now it's
you know it's it's like an allegiance beyond words to what is important right now and I think
you feel that I think the people that you and I know who work every day on these issues who are
unseen so many people who work unseen you know protecting these things um I just hope that what
you're doing on your podcast the work that we're talking about inspires even more people
to show up in that way and and advertise that it might sound like uh difficult but actually
there's meaningfulness and purposefulness and sacredness in showing up that way like what you
trade for some darkness that you might enter into your psyche you also on the other side get
more beauty and and and more sacred world thank you here's to hoping this is a season finale
and not a series finale and uh again you're doing such important work to change the cultural
conversation uh on this and thank you and to be continued my friend good luck with the movie
so are you Nate and it is a a deep honor and privilege to be your friend and uh I've learned so
much from you over the years and your work I think deeply influences as well how we see
all the AI conversation because in the same way that oil is the thing that kind of pumps up
the GDP of all these countries that's we're now kind of switching from the oil-based economy to
the intelligence-based economy and that framing I have learned that I'm operating with every day
that informs our work I've learned from you so thank you for all the work that you do
there's so many ways that we uh you know we're influencing and informing each other and I'm
grateful for it see you soon my friend see you soon if you'd like to learn more about this
episode please visit thegreatsimplification.com for references and show notes from there you can
also join our high-low community and subscribe to our sub-stack newsletter this show is hosted by me
Nate Hagen's edited by no troublemakers media and produced by Misty Stinett and Lizzie Ciriani our
production team also includes Leslie Batlutz, Brady Hyan, Julia Maxwell, Gabriela Slamon and Grace
Brunfeldt thank you for listening and we'll see you on the next episode
