
Should we push AI forward as fast as possible, or be more careful about how it develops?
Two competing views are emerging:
In this episode of the a16z crypto show, Vitalik Buterin (Ethereum founder) and Guillaume Verdon aka "Beff Jezos" (Extropic founder & CEO) join Eddy Lazzarin (a16z crypto CTO) and Shaw Walters (Eliza Labs founder) for a deep debate about these two perspectives and what they mean for AI, crypto, and the future.
Highlights:
00:00 Opening
07:02 Thermodynamics and first principles
16:04 Acceleration, entropy, and civilization
28:29 The core disagreement
32:42 Comparing and contrasting e/acc and d/acc
36:20 Open source, open hardware, and local intelligence
54:18 Should AI be slowed down?
1:02:35 Autonomous agents and artificial life
1:21:07 Crypto as the trust layer between humans and AI
1:35:37 Closing arguments
Follow a16z crypto for more...
X: https://x.com/a16zcrypto
LinkedIn: https://www.linkedin.com/showcase/a16zcrypto/posts/
YouTube: https://www.youtube.com/@a16zcrypto
📩 Subscribe for more industry reports, trend updates, news analysis, builder guides, and other resources: https://a16zcrypto.substack.com/subscribe/
*** As always, none of the following should be taken as investment, business, legal, or tax advice. Please see a16z.com/disclosures for more important information, including a link to a list of our investments.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Rapid technological acceleration has been a fact of human civilization for about a century,
and that acceleration is itself accelerating. To me, that is the fundamental truth, right?
And whether we yell at it or disagree with it, it is happening. You know, it's like gravity.
Those that adopt that culture will literally have higher likelihood of surviving in the future.
If you take any one bit and you kind of accelerate indiscriminately, then basically you do lose all value.
And so, to me, the question is like, how do we accelerate intentionally?
I think there is a real sense in which we have one shot at this.
e/acc isn't trying to kill everyone. It's actually trying to save everyone.
If we decelerate, we're going to have a huge opportunity cost and we're going to miss out on a much better future.
Oh, nice. Wow.
So, this all started because I just knew these guys had to meet each other,
and it rapidly devolved into all of this, which I'm really glad to see. This is incredible.
And it's the first time that you guys have really talked in person, right?
Awesome. And this is an incredible synthesis. So, yeah, my name is Shaw.
I've known these guys for a while. I'm here with Eddie from A16Z Crypto,
and this is a great time. So, everybody's here. I guess you're allowed to, you know,
please be respectful. This is a conversation between them. We're just going to kind of throw
some questions at them as we go along to keep it going, but feel free to dig into whatever you guys
want to. This is really here for you. We're all just here to listen. And this will all be live
streamed to the other floor. It's not going to be public. We will be cutting up the video and
putting it out later. So, everyone will get to see and share and everything. And I think,
without further ado, I'm going to leave it to Eddie to get started with some of the questions.
So, before we ask them, I'd love to get a sense of the crowd. It's always hard to tell the
difference between the Twitter timeline and reality. Who here could explain e/acc in a few sentences
to someone else? That's actually less than I thought. That's good to know. Who here could explain
d/acc in a few sentences to someone else? That might have been more, actually. That was very
interesting. Thank you for that. So, maybe we'll just start there. The term accelerationism,
at least in the techno-capitalist sense, dates back to Nick Land's CCRU research group in the 90s.
But some might say that these ideas really took shape even further back, in the 60s and 70s with
Deleuze and Guattari. Let me maybe start with Vitalik. Why are we having an earnest conversation
about the ideas of philosophers right now? What makes this accelerationism idea relevant?
Again, I think ultimately, we're all here trying to make sense of the world and trying
to make sense of what it even makes sense to do in the world. This is something that we've had
for thousands of years. I think the new thing that we've had for probably roughly a hundred years
is making sense of a world that has rapid change and sometimes, maybe this is getting a bit
ahead, rapid destructive change. The early era of this was
the pre-World War I period, around the 1900s, when there was a lot of original techno-optimist
sentiment. There was a lot of excitement back then. The thing that we call tech today was different back then:
chemistry was tech, and then electricity was also tech. If you watch movies like some
of the Sherlock Holmes ones, you get to really feel the vibe of that kind of era. It was rapidly
improving living standards, rapidly liberating women from household labor, doing amazing things,
extending lives, and then, of course, World War I happened. In World War I, famously, people
rode in with horses and rode out on tanks. It was a destructive war. Then World War II came,
and World War II was an even more destructive war. It gave birth to "I am become Death, destroyer
of worlds." This is some of the background of things like postmodernism, and people basically
trying to make sense of a world where a lot of beliefs were shattered. What do we believe now? This is something
that happens every generation, I think. There are a lot of people today who grew up
believing in 1960s-era, postmodern beliefs and feel those beliefs have been shattered. There are even
people who, for example, grew up believing in what I would call hipster environmentalism. That's
a lovely, beautiful idea: we need to protect the environment and not go so fast. You believe in
this, and then you realize that the nuclear power plants that you advocated shutting down basically
mean that your country is stuck depending on Russia. Basically, these are just very natural things that
happen. I think rapid technological acceleration has been a fact of human civilization for about
a century. That acceleration is itself accelerating, and things like postmodernism were a response to
that; a lot of the currents of the 1960s were a response to that. You can respond by saying
it's inevitable. You can respond by saying we have to slow it down, as a lot of people did.
It's just a constant, rapid response to the effects of the ideas that previous generations
tried to execute. We're now quite rapidly seeing a new version of that exact
same cycle continue today. I think it's mixing both themes that have been around for a long time
together with some pretty new ideas. So, Guil, what is e/acc, and why? Yeah, I guess e/acc is
kind of the byproduct of myself asking why are we here or how are we here? What was the
generative process that gave rise to us, that gave rise to civilization, to technology, that got us to
this point where we're having this conversation in this room. We all have wonderful technology around
us, and we emerged from a soup of organic matter. So, somehow there is a physical generative
process, and my day job is trying to do generative AI as a physical process, in devices, and so that
was simmering in my brain and I wanted to apply that sort of thinking, that sort of framework,
that physics first viewpoint to all of civilization, trying to understand civilization as a
petri dish, trying to understand how we got here in order to predict where we're going.
And that got me down the rabbit hole of the physics of life itself, like emergence of life,
abiogenesis, and a field of physics called stochastic thermodynamics, which is the thermodynamics
of out-of-equilibrium systems. So, what describes life forms, and also our brains,
right, intelligence. So, it's both the physics of life and intelligence, but it's also the physics
of any system that obeys the second law of thermodynamics, which includes our whole civilization.
And so, really to me, it's just been an observation that systems tend to self-adapt and
complexify in order to capture work from their environment and dissipate heat. And that is the
fundamental driving force behind all of progress, all of quote-unquote acceleration,
all of everything we see today. And to me, that is the fundamental truth, right? And whether we
yell at it or disagree with it, it is happening. This is, you know, it's like gravity.
You can argue with thermodynamics; it doesn't care. It keeps going. And so, you know, to me,
e/acc was like, okay, well, given this fact, and given that, if you look at the equations
carefully, you can observe that there's a Darwinian-like selection effect for every bit of information
prescribing configurations of matter. So whether that's a gene, a meme, a chemical specification,
product design, a policy, there's a selective pressure on everything and everything is
inter-coupled in this big soup of matter. And that selection pressure selects bits
according to whether they're useful for the system they're part of. They're useful to better
predict the environment, capture work, and dissipate more heat. So are they useful for sustenance,
for sustaining yourself, preserving yourself, predicting your environment, predicting danger,
but also, is it useful for growth? Because if you grow and replicate, then those bits of
information replicate, and there's a natural error correction. So in a way, it's just a byproduct
of the selfish bit principle that emerges from physics. And one thing it tells us is that the bits
that are part of the future are the bits that are useful for growth and further acceleration of
this growth. And so, to me, I wanted to design a culture such that, if we bootloaded this mental
software into the population, those that adopt that culture will literally have higher fitness,
they will literally have higher likelihood of surviving in the future. So e/acc isn't trying to
kill everyone. It's actually trying to save everyone. Basically, to me, it is, I think, mathematically
provable that having a decelerative mindset (and it's a general pattern of many subcultures:
making yourself small, degrowth, and so on) is actually negative. It gives you negative
fitness, and it actually accelerates your downfall as an organism. Whether it's a decelerative mindset at
an organizational level in a company, at a national level, or at an individual level, you're lowering
your likelihood of being part of the future. And to me, it is not necessarily virtuous
to spread those memes, to spread that sort of pessimism, doomerism. It's actually,
well, we're using a lot of terminology that I think needs unpacking. Yeah, quite a lot to unpack. You know,
it sounds like... e/acc, what is that, what does it stand for? What is acceleration?
And what is deceleration? And what is a decel? These, like, yeah.
What I'm trying to get at is, I think e/acc came as a bit of a response to something that
was happening in the culture at the time. Yeah, yeah. What was happening in our culture?
What was it a response to? Say a little bit about the dialogue that
ultimately led to encapsulating it in a name. So, you know, it was 2022. I think the world
was somewhat pessimistic. We're just emerging from COVID. Things weren't looking good. We're
feeling down. Everybody was kind of lacking sunlight. Everybody was sort of, you know, pessimistic
about the future. And essentially, yeah, AI doomerism was kind of the monoculture.
What is that? AI doomerism is just kind of panicking
about the fact that if there's a system that is too complex for our brains, for human
brains or generative models, to have a predictive model of, then we can't control it.
And things we can't control give us entropy in our model of the future,
and that induces anxiety, right? And then AI doomerism, to me, has been a weaponization of
people's anxieties for political purposes. And overall, and we'll get to this,
I think AI doomerism is a big negative. And I wanted to create a
counterculture to that. Now, what I saw in the X algorithm, and many
algorithms, is that they reward agreement or strong disagreement, right? So,
if you view the algorithm as a Markov chain, asymptotically everything converges
to bipolar distributions of opinions, for anything.
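(A toy sketch of that claim, as an editorial aside: every name and parameter below is invented for illustration, not taken from the conversation. An engagement-maximizing feed that rewards strong agreement or strong disagreement pushes a centrist population toward the poles.)

```python
import random

# Hypothetical toy model: engagement is highest when an agent's stance and a
# post's stance are both extreme, and engagement reinforces the agent's side.
def step(opinion, lr=0.05):
    post = random.uniform(-1, 1)            # a post with some stance
    engagement = abs(opinion) * abs(post)   # extreme pairings engage most
    if random.random() < engagement:        # engaged: the feed reinforces you
        opinion += lr if opinion >= 0 else -lr
    return max(-1.0, min(1.0, opinion))

opinions = [random.gauss(0, 0.2) for _ in range(1000)]   # centrist start
for _ in range(5000):
    opinions = [step(o) for o in opinions]

# Nearly all mass ends up near -1 or +1: a bipolar opinion distribution.
print(sum(abs(o) > 0.9 for o in opinions) / len(opinions))
```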
So it's like, you had the EA/MIRI cult complex, you know, I kind of clustered them there, I mean,
not so gracefully, but you have that complex. I was like, what's going to be the opposite
of that? And, you know, to me, I was like, okay, well, the opposite of anxiety
is curiosity, right? Instead of downside protection, it's upside-seeking,
fear of missing out. It's, you know, a dopaminergic sort of mindset. And it's like,
hey, actually, if we decelerate, we're going to have a huge opportunity cost and we're going to
miss out on a much better future. And it's just like painting that future more vividly and bootloading
this mindset of optimism because the thesis is that, you know, if you study neuroscience, we tend
to want to have a convergence of our beliefs and the world. And so sometimes we adjust our
beliefs to the state of the world, but we also adjust the world to our beliefs. So if we believe
that the state of the world will be bad, then we tend to steer the world to that bad outcome.
If we think the world will be great and we think of positive futures, we tend to
hyperstition them. We tend to increase the likelihood of their advent. And so I had a
responsibility to spread optimism in order to hyperstition a positive future.
And yes, I, you know, online, I am very, you know, aggressive and, you know, use all the political
mind hacks because, you know, to me, the end justifies the means, like if more people
are optimistic about the future, feel like they have agency, feel like they can build and,
and, and make an impact in the world, then that's really good. And I think, you know, sometimes
I'm a bit ruthless with my opponents on the other side of the aisle. I think in private meetings,
I'm much more friendly, but, you know, for, you know, like I said, I just took the extreme opposite
to the current monoculture. And then that created some polarity. And then now we can have
discussions of where we want to lie, right? So, I've been following e/acc since the beginning.
And it's been a message that, as a programmer sitting in a room, has been incredibly inspiring.
It's great to see a positive message spread, and it spread very organically. And I
would say that at the time it started, it was clearly a reaction to this negativity. But
now, in 2026, it feels like e/acc won. It feels like that's no longer the case. And
obviously Marc Andreessen posted the Techno-Optimist Manifesto, which really
codifies some of those ideas, and then Vitalik has this
greater commentary on it. So I'd love to know from you, Vitalik: what is e/acc in your mind,
and what is d/acc, and what makes them different? What drove you to go this
direction? Yeah, I mean, I think maybe I'll also start my answer with thermodynamics,
right? Because why not? So, I mean, it's an interesting topic, right? Because
I think we hear about entropy in the context of hot and cold, and we hear about entropy in the
context of cryptography. And these are like different universes. And actually, we're not really
taught how they're actually the exact same thing, right? So I'm going to try and explain
this in three minutes. And so, okay, so the prompt is why, why is it possible to mix hot and cold,
but why can't you separate things into hot and cold, right? And so here's my explanation,
right? So imagine you have two jars of gas. Each jar of gas has a million atoms in it, right?
This jar is cold. And because it's cold, the atoms move slowly. And so the velocity of every atom
you can represent with a two-digit number, right? Over here, the atoms are hot. The velocity of
every atom you can represent with a six-digit number, right? Now, how many digits do you need to
represent, or rather, if that's what you know, how many digits of information do you not know
about the system? The answer is 8 million, right? You don't know the exact velocities here: that's two
times a million. You don't know the exact velocities there: that's six times a million. Together, 8 million, right? Now,
what happens if you mix them, right? Well, if you mix them, the velocities get averaged, and so they
become numbers from zero to 500,000. And so 5.7 digits, that's actually pretty close to six,
right? And so you mix them, you have two jars. On one side, you have a jar where the
amount of information you don't know is 5.7 million, and then over here 5.7 million, right? And so
the amount that you do not know about the gas has gone up from 8 million digits to 11.4 million
digits, right? So the amount that you do not know has increased, right? This is what it means for
entropy to go up. Now, we can try a proof by contradiction. Imagine you had a device that goes
the other way, right? Imagine you had a device that can take two jars of this like half hot gas,
and like actually bring all the heat over here and all the cold over here. By conservation of energy,
this is totally valid, right? Because it's like the same energy, but why can't you do it? And the
answer is, well, if you could, then what you've done is you've taken this system where
what you don't know is 11.4 million digits, and you've turned it into a system where what you
don't know is 8 million digits, right? Now, because the laws of physics are time-reversible,
this is like the important thing, right? What that implies is, if that kind of magic device
existed, then actually, like you could run the same process in time reverse, and so you could
always recover the original, right? And so what that implies is, if that gadget existed,
it would also be a gadget for compressing an arbitrary 11.4 million digits into 8 million digits,
which we know is impossible. Now, this also, by the way, tells you why Maxwell's
demon works, right? Which is basically that if you had a magic demon, then actually, yes, you
can split the hot and the cold; the Maxwell's demon just has to know the extra
3.4 million digits separately, and then you're fine, right?
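(For readers who want the bookkeeping spelled out, here is the digit-counting from the two-jar example, as stated above; this is arithmetic on the numbers already given, nothing more.)

```python
import math

N = 1_000_000  # atoms per jar

# Missing information, in decimal digits, when each atom's velocity is an
# unknown number up to v_max: log10(v_max) digits per atom.
def unknown_digits(v_max):
    return N * math.log10(v_max)

cold = unknown_digits(10**2)            # cold jar: 2 million digits
hot = unknown_digits(10**6)             # hot jar: 6 million digits
print((cold + hot) / 1e6)               # 8.0 million digits before mixing

# After mixing, the velocities average out to numbers up to about 500,000,
# i.e. log10(5e5) ~ 5.7 digits per atom, in both jars.
print(2 * unknown_digits(5 * 10**5) / 1e6)   # ~11.4 million digits

# An "un-mixer" would map 11.4M unknown digits back down to 8M, i.e. it would
# losslessly compress arbitrary 11.4M-digit strings into 8M digits: impossible.
```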
So, what's the moral of this? Basically, one: entropy is subjective,
right? Entropy is not a physical statistic; it's actually how much you don't know, right? And,
you know, if it turns out that I actually computed a cryptographic hash function
and placed the atoms according to its output, then, based off of that, for me,
the bottle might be very low entropy, right? Maybe I could separate it, right? But
ultimately, it also means that when entropy goes up, our ignorance about the world
goes up; what we do not know goes up, right? You can go from knowing more to knowing
less; you cannot go from knowing less to knowing more. Now, but then
why does education exist? Why do we become smarter? And the answer is that we go from knowing
fewer things that are useful to knowing more things that are useful, right? Basically,
the increase in entropy means that we constantly know, in some sense, less and less about
the universe, but the bits that we do know are more meaningful to us, right? And so, there is,
like, a thing that is being spent, and then there is a thing that we are gaining. And the thing
that we are gaining, this is, like, I don't think that there is some, like, simple mathematical
formula that defines it. The thing that we are gaining, I mean, ultimately, this is basically
our morality, right? This is that, you know, we value life, we value happiness, we value joy.
There are a lot of different reasons why we find an Earth full of thriving, beautiful
humans more interesting than Jupiter, even though Jupiter has a larger number of particles
inside it and you need more digits to express what each and every one of them is doing,
right? And so, I think, like, value comes from us is the first thing, and I think also, like,
this connects to what we want out of acceleration, right? Which is basically that, like, our goals,
to me, yeah, ultimately come from us, right? And so, the question is, like, okay, we are accelerating,
right? And what do we want to accelerate? And, I mean, if we want to, like, switch, you know,
like, mathematical analogy is a bit, right? If you take any LLM, and you imagine you randomly
flip one of the weights to positive 9 billion, what happens, right? Worst case, the LLM becomes
useless. Best case, every weight that's not connected to the 9 to the 9 billion doesn't do anything,
right? And so, best case is, you have an LLM that's, that's a weaker worst case, you just have
junk. And so, basically, I see human society as being kind of like an LLM, it's this complicated
organism. And if you take any one bit, and you kind of accelerate indiscriminately, then, that's
basically, you do resolve value. And so, to me, the question is, like, it's basically, you know,
like, it's what, like, Daron Asimov, who calls the Nero corridor, even though, like, you know,
the, yeah, like, the details on the politics are different, but it's like, how do we accelerate,
kind of intentionally? Can I jump off of that? Yeah. So, yeah, that was an interesting way to
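(The weight-flip analogy is easy to make concrete. A minimal sketch with a random two-layer toy network, kept linear so the effect is easy to see; nothing here is anyone's actual model.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random two-layer network standing in for "any LLM".
W1 = rng.normal(0, 0.1, (64, 64))
W2 = rng.normal(0, 0.1, (64, 64))
x = rng.normal(0, 1, 64)

def forward(w1, w2, v):
    return w2 @ (w1 @ v)

y = forward(W1, W2, x)

W1_broken = W1.copy()
W1_broken[0, 0] = 9e9        # "randomly flip one weight to 9 billion"
y_broken = forward(W1_broken, W2, x)

# The flipped weight swamps everything: the output becomes junk, and every
# other carefully tuned weight contributes effectively nothing.
print(np.linalg.norm(y), np.linalg.norm(y_broken))
```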
Can I jump off of that? Yeah. So, that was an interesting way to
describe the entropy of a gas. Essentially, the reason physics is not reversible is the
second law of thermodynamics: if you have a trajectory of a system and it
dissipates heat, it can't go back, because the likelihood of going forwards versus backwards
decays exponentially with how much heat you've dissipated.
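(As an editorial gloss: that exponential statement is the detailed fluctuation theorem of stochastic thermodynamics, the field named earlier. In one line:

\[
\frac{P[\text{forward trajectory}]}{P[\text{time-reversed trajectory}]}
= e^{\Delta S_{\mathrm{tot}}/k_B} \approx e^{Q/(k_B T)},
\]

where \(Q\) is the heat dissipated into a bath at temperature \(T\): a path that dumps a lot of heat is exponentially more probable than its reversal.)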
In a way, it's like, literally, how much of a dent have you put in the universe, right? A dent is an inelastic collision, right? If I have
a bouncy ball, it's elastic. If I, you know, take some Play-Doh and smash it, then it just keeps
the smashed shape. That's inelastic, and it's hard to reverse. Essentially, every bit of
information is fighting for its existence. And in order to persist, it needs to make more evidence
of its existence that's indelible. So, it's making a larger dent in the universe. And that
principle is how life and intelligence emerge from a soup of matter, that complexification
of systems becoming more and more complex, having more and more bits of information. A bit of
information is a reduction of entropy, right? Entropy is lack of knowledge; information reduces
your entropy about a system. Sorry to interrupt, but where did you want to take this?
Yeah, I'd love to know what e/acc is. Okay. So, e/acc, ultimately, is a meta-cultural
prescription. It's not a culture itself; it tells you what you should do.
What is the thing that is accelerating? The thing that is accelerating is the complexification
of matter, so that we can predict our environment, so that we have better autoregressive
predictive power, and so that we capture more free energy and dissipate heat. So, the Kardashev
scale, right? That is just the justification from first principles for why the Kardashev
scale is the ultimate metric for how well we're doing as a civilization.
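(For reference, the Kardashev scale being invoked is usually quantified with Carl Sagan's interpolation on a civilization's total power use \(P\), in watts:

\[
K = \frac{\log_{10} P - 6}{10},
\]

so Type I, planetary, sits near \(10^{16}\) W; Type II, stellar, near \(10^{26}\) W; Type III, galactic, near \(10^{36}\) W; and humanity today is around \(K \approx 0.7\).)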
So, allow me to bring it back for one second. Maybe this is a little bit selfish, or maybe I'm also helping the audience:
the metaphors and the explanation rooted in physics and entropy and so on are, in a way,
an explanatory tool to try to get at a phenomenon that we experience directly. And that experience is
the acceleration of the productive capacity of our economy, the acceleration of the development
of technology, and the consequences therein. That's my understanding of what acceleration is.
Essentially, every system, whatever its boundary is, gets better at predicting the world,
and, by doing so, it can secure more resources for its sustenance and its growth, whether it's
a company, whether it's, you know, individuals, nations, Earth in general. And,
if you just play the movie out, it means that now that we have a way to convert free
energy into predictive power, into artificial intelligence, what that will lead to is
an ascent on the Kardashev scale. That's what the equations predict. And that
ascent means more energy, more artificial intelligence, more computing, more of these things.
But even though we are expelling entropy into the universe, we are gaining order locally.
We're actually gaining negentropy. We're getting the opposite of entropy. So
sometimes people think, like, oh, if you're for more entropy, why don't you blow it all up?
It's like, no, then you would stop producing entropy. Life is actually more optimal.
Life is an energy-seeking fire. And it just gets smarter and smarter at finding pockets of energy.
And the natural progression of things is we're going to get out of our local gravitational well
and find other pockets of free energy and use them to self-organize into more and more sophisticated
systems that are smarter and can, you know, expand to the stars. And so, that's kind of
the ultimate goal of e/acc. It's kind of a formalization of the
Elonian sort of mindset of, you know, cosmism and expansionism. But it gives you a
fundamental metric. And then the prescription of e/acc is: follow the Kardashev gradient. Whatever
policy or actions you can take in the world that maximize impact on our ascent
up the Kardashev scale, that's what you should do. That's how you should live your life.
So, it's like a meta-heuristic for how to design a policy for how to live your life. And
that to me is a culture. And it's very meta because it's supposed to be true at all times.
It should have a very long shelf life. e/acc is made to be a very Lindy culture.
So, yeah. Well, it's clear that there's a deeper thing going on here for you.
This is almost like a mathematically complete spirituality for people living in
the shadow of the God-is-dead, Nietzsche kind of thing, something to make us feel good.
But I would also say that there's a really practical, on-the-ground side. This is happening
today, which I think is where Eddy is trying to get. And I think, Vitalik, you did a
great job of addressing a lot of the real practicalities in your blogs on d/acc. I need to
lock you guys in a room with whiteboards on some quantum stuff sometime. But for
right now, let's bring it back down to earth. And look, Eddie is not scared; he's like, this is going to be great.
But I'm a little scared. And I come to you guys because you give me hope and clarity.
So, bring it back to you, Vitalik: what inspired you in this? What is e/acc? And what is
d/acc? Yeah. So, for me, d/acc stands for, I usually say,
decentralized defensive acceleration. But the d has also stood for differential and democratic
as well. But I think, to me, the core idea is that technological
acceleration has been amazing for human beings, and it's something that we need to continue
as a baseline, right? Even if you look at all of the crazy things and all of the worst
downsides that technology did to us in the 20th century: if you look at, for example,
life expectancy, life expectancy in Germany in 1955 was higher than in 1935.
Basically, we have just benefited from a massive step up despite every
unsafe thing that we hear about. And this is something I've even seen myself,
you know, observing my grandparents' home
basically go up from having this very rough outhouse toilet in the backyard,
which I would totally hate, and I'd have to go out to the forest to poop because I
couldn't stand it, to something that's actually very modern and hospitable, right?
the world has become more beautiful, the world has become more enjoyable, the world has become
better for health, it's been able to sustain more of us, it's become more interesting. And
like, these things are really good and beautiful for us. At the same time, I think, you know,
we need to recognize the role of explicit human intention in making a lot of
those things happen, right? So, for example, in the 1950s, there was a lot of smog everywhere in the
air, and people decided smog is a problem, smog sucks, and we need to do a bunch of stuff to
get rid of the smog issue. And now, smog is not a problem, or at least much less of one. The
ozone layer was an important issue, and we actually did things to fix that, right?
And then the other thing is, you know, especially with rapidly accelerating
technology and AI, I basically see two kinds of risks. One kind is multipolar risks, which
is basically the risk that people will use the technology to do very bad things,
right? One type of concern is sort of the equivalent of,
you know, anyone being able to make a nuke at a 7-Eleven sort of thing. And then,
there's also the concern that, well, AI itself is, you know, something that literally
is a mind of itself, right? And, especially once it becomes powerful enough that it acts without
human involvement, you know, what will it do? And then there are unipolar risks, of which,
I think, actually, a singleton AI itself is one, and, you know, the other one is,
I mean, a combination of AI and other modern technologies
enabling, like, a permanent dictatorship that you cannot escape. Like, that deeply worries
me, right? Like, this is something I follow, right? And, you know, in
Russia, for example, on the one hand, the toilets have gotten much better;
on the other hand, it's gone from protesting being possible to the sort of thing
where, if you protest, the cameras will see you and
you get a knock on the door at 2am, right? And AI is supercharging this; you know,
there's a lot of concentration of power happening. And, like, both of these
things really worry me, right? And, like, to me, D-AC is really, you know, attempting to chart
a path forward that continues this acceleration, and accelerates it, but at the same time,
that really deals with both kinds of risks. So, you would say that d/acc is emphasizing specific
other categories of risk that are maybe less emphasized than you'd like to see in e/acc.
I think there are many kinds of risks of technology, and many of them are valid.
They have different scales; some of them become more salient in different
models of the world and in how fast things end up happening. But I think there's
a lot that we can do to really push against all of those kinds of risks, right?
So, yeah, Guil, do you want to say a little bit? What was the question again? It was just
comparing and contrasting e/acc and d/acc. Yeah, I think, actually, both Vitalik and I
are very concerned about over-concentration of power that can happen with AI,
and that was a big part of the e/acc movement, especially at the beginning. It was
pro-open source. We want to diffuse AI power, because, you know, our worry was that the AI
safetyism meme was so potent that certain power-seeking individuals could weaponize it
to consolidate control over AI and convince you you shouldn't have access to AI for your own good.
And really, if you have a gap in cognition between individuals and the centralized entities,
they will control you. They can have a full world model of everything going on in your brain,
and they can prompt-engineer you and effectively steer you, right? So, you want to
symmetrize AI power. Just like the Second Amendment is about
the government not having a monopoly on violence, so we can check the government if it gets
out of hand, you need that for AI. So, we need everybody to be able to own their own models,
own their own hardware, for the technology to be diffused, for the power to be diffused.
But to me, I think, like, you know, discussions of stopping, you know, AI research and AI progress,
that's completely out of the question. AI is a very fundamental technology. It's almost
a meta-technology, a technology that produces technology. It gives us predictive power over our world.
It can be added onto any task we want to do in the world. It could be tacked onto any technology
and turbo-charge it. It accelerates the acceleration. The acceleration is this complexification where
things become lower friction, things just become better. You know, our bodies feel comfortable because
we have this estimator we call happiness: what's my estimate of the
expected persistence of my bits? That's what we're hard-coded for. And so, you know,
I think, to me, the EA, effective altruist, hedonic utilitarianism is maybe
the wrong way to view things, like maximizing happiness. And to me, I want to have an objective
measure of progress. And that's what the e/acc framework is. It's, hey, actually, the
objective view is like, how are we progressing as a civilization? Are we scaling up? Because to
me, you know, you have to complexify. You have to have more intelligence. Things have to improve
in order for you to scale, right? It's like the ultimate benchmark. And at the same time, you
know, there can be setbacks. Like, you know, as Vitalik said, if AI power were to be
over-concentrated in the hands of a few, that would be net bad for growth, because it's much better
if that technology is very diffused. So in that respect, we're very aligned, right? Because
can I jump in here? Because I think you're touching on something where I think both of you
share a lot of DNA. I mean, obviously, Vitalik has produced a lot of MIT-licensed
open-source code, although I know you have some more updated feelings about GPL and such. But
obviously, both of you have been champions of open source and now open hardware. And these
have been separate things, but now that we're seeing people start to, like,
put weights onto chips, ASICs, these kinds of things, they're starting to become very
similar. So I'm very curious what both of your thoughts are on open weights and open
hardware. I mean, you're both actually pretty deep in hardware right now. And then also,
like, what is the difference between e/acc and d/acc with regards to this? This has been a crazy
week, obviously, where, like, a lot of the things you're talking about have been tested, where
you have the government and corporations trying to figure out what the right answer is. And so I'd
love to know what you guys are thinking just based on this week. And, like, I'd love
to tease out if there are any kind of differences between you and where you think this goes.
Yeah, I mean, I think, you know, to me, open source accelerates the search over hyperparameter space.
It makes our models better. We can collaborate, like a swarm, and traverse
design space, right? And that's what acceleration allows us to do, right? With better technology
with more AI, now AI for coding, that search process over design space for AI itself is accelerating.
You know, we're going to open-source our superconducting hardware designs
very soon; I just want to stagger it with our launch. But I think diffusing knowledge is also
diffusing power, right? And diffusing knowledge when it comes to how to produce intelligence is
super important. We don't want to... you know, there were discussions, apparently, in the
last administration, according to Marc Andreessen, that the US government might want to put the
genie back in the bottle and maybe ban, not ban linear algebra, but more or less ban the
math surrounding AI. And to me, that would be almost like banning knowing about biology; it would
be a huge step back. And so there's no going back, right? Like, this knowledge is out there.
If you try to ban it in the US, some other country, a third party, some deregulated
island somewhere is going to keep developing it. And then you're going to have a huge gap, and now
you have a big risk. So to us, the biggest risk is a gap in capabilities. And the way to reduce
that risk is to make sure AI power is diffuse. So whenever there's, like, you know, the AI
doomerism, like, oh, be very afraid, we're the ones responsible, we're the ones who should be
put in charge, trust us. You know, I just get very skeptical, because even if they're well-meaning,
like we saw this week, right, they could just get pushed out if they centralize too much power.
It's too juicy for those that want that power. And so that's kind of what we were warning about
for years, and now it kind of happened. And, you know, Dario is licking his wounds and, you know,
some lessons learned there in sort of realpolitik. And, anyways,
so, yeah, Vitalik, what do you think of all this?
The two kinds of risks that I think about, right, are unipolar risks
and multipolar risks, right? And with unipolar risks, I mean, you know,
the Anthropic situation is so fascinating, right? Because, you know,
ultimately, the thing that they got dinged for is refusing to let their
AI be used for, specifically, fully autonomous weapons and mass surveillance of
Americans, right? And so, presumably, there's a chance that, you know,
the government and military of this country want to do mass surveillance
of Americans, right? And this is an example of
unipolar risk, right? I think, basically, surveillance is one of these
things where the big effect that it has is that it takes whoever is stronger and makes them even
stronger, right? And it removes spaces where pluralism can form, where counter-elites can
coalesce, and where people can safely explore alternatives. And
surveillance is one of these things that can easily be supercharged, right?
On defense, getting back to open hardware, I want, actually, to
talk about one of the projects that we've been doing. A big part of
what I've done in d/acc is basically supporting various projects that develop
open-source defensive technologies. So, technologies that will make it easy for
all of us to continue to be safe and protected in a world where more powerful and crazy
capabilities exist, right? And so, in the bio world, for example, this means rapidly leveling up
our civilization's ability to withstand pandemics. And so, I claim that it is very much within reach for
us to have China-level COVID resistance at the same time as Sweden-level interference in
people's regular lives. And that's even the minimum bar. And this basically involves
stacking filtration, UVC, testing. Like, literally, a company we invested in is fully open source;
the end product of this is basically passively testing the air and
being able to tell if there's COVID in the air, right? Like, in general, right? Like,
essentially, the number of sensors in the world is going to go up, right? And sensors are a big
part of being able to act better in the world, right? But at the same time, sensors mean
surveillance, right? And there's a project we're doing; we gave out
some of these at DEFCON. What these are is sensors that
collect air-quality information, CO2, AQI, a few other things. And, locally, they basically
anonymize the readings with differential privacy and then FHE-encrypt them. And that
gets sent off to a server, and the server is able to compute over all of the data
and then collectively decrypt the final answer, without being able to see the input from
any individual person, right?
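(A sketch of the pattern being described: local differential-privacy noise, then encryption, then a server that can only learn the aggregate. This is a simplification, not the actual project's protocol; additive secret-sharing across several servers stands in for the FHE layer, and every name and constant below is invented.)

```python
import random

MOD = 2**61 - 1    # shares live in a large modular field
SCALE = 100        # fixed-point scale for sensor readings

def local_report(co2_ppm, epsilon=1.0):
    # 1) Local differential privacy: Laplace noise (difference of two
    # exponentials); the noise scale here is illustrative, not calibrated.
    noisy = co2_ppm + random.expovariate(epsilon) - random.expovariate(epsilon)
    return int(noisy * SCALE) % MOD

def mask_and_split(value, n_servers=3):
    # 2) "Encrypt" by splitting into random shares: any subset of fewer
    # than n_servers shares is statistically independent of the reading.
    shares = [random.randrange(MOD) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

sensors = [412.0, 455.5, 398.2, 520.9]   # made-up CO2 readings, ppm
per_server = list(zip(*(mask_and_split(local_report(v)) for v in sensors)))

# 3) Each server sums the shares it holds; combining the per-server sums
# ("collective decryption") reveals only the aggregate, never one reading.
total = sum(sum(col) % MOD for col in per_server) % MOD
print(f"mean CO2 estimate: {total / SCALE / len(sensors):.1f} ppm")
```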
And this is, basically, where the goal is to deliver the higher levels of safety, but at the same time protect people's privacy,
and protect against, you know, the multipolar risk and the unipolar risk at the same
time. And I think this is how we can collaboratively, as a world, work
together to build something better. And for hardware, basically, I think we need
open hardware, and we need verifiable hardware. Like, we need every camera in this room to
prove, basically, what kind of processing it is doing, right? In my ideal world,
fine, you can have a million cameras in the streets to
detect when people are engaging in violence against each other,
but, ideally, you'd have attestations, signatures, or, more generally, a public
right of inspection, and you'd be able to inspect these things and verify that
the only thing that they do is check when people are doing violence and report that, right?
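(A sketch of what a verifiable-camera attestation could look like, under my own assumptions: the firmware string, key handling, and payload layout are all invented, and it uses the pyca/cryptography package.)

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Digest of an audited, open-source firmware build (hypothetical).
AUDITED_FIRMWARE = hashlib.sha256(b"open violence-detector build v1.0").digest()

device_key = Ed25519PrivateKey.generate()   # burned in at manufacture
device_pub = device_key.public_key()        # published for public inspection

def attest(event: bytes) -> tuple[bytes, bytes]:
    # The device binds each report to the firmware that produced it.
    payload = AUDITED_FIRMWARE + hashlib.sha256(event).digest()
    return payload, device_key.sign(payload)

def inspect(payload: bytes, sig: bytes) -> bool:
    if payload[:32] != AUDITED_FIRMWARE:    # unknown firmware: reject
        return False
    try:
        device_pub.verify(sig, payload)     # forged report: reject
        return True
    except InvalidSignature:
        return False

payload, sig = attest(b"violence detected, camera 17, 21:04")
print(inspect(payload, sig))   # True only for audited firmware + a valid key
```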
So, these kinds of technologies, the verifiable hardware idea, very interesting,
especially because it's not something that, I think, comes up very often. But can I just ask
a very stupid question, which is just: open hardware, verifiable hardware, is that an e/acc
thing or a d/acc thing? I don't know if I've ever talked about that specifically. The thing I talk about is, to me,
the greatest risk is a gap in intelligence between centralized entities and decentralized entities.
So, individuals versus the government. And so, right now, with the current compute paradigm,
to run a very smart AI model, you need a huge cluster with hundreds of kilowatts.
That is not accessible to the individual. People want to own and control the extension of their
cognition. That's why we saw the OpenClaw Mac mini craziness of the past few weeks; people
are clamoring for that. The only way you can symmetrize power between the individual and
centralized entities is if there's a densification of intelligence. We need AI hardware that's far
more energy efficient, so you could plug it into a wall and you could own the extension of your
cognition. Because this year, what's going to happen is the models are going to start online learning,
and they're going to become extremely sticky. It's going to be, like, trying to change exactly
this. At the risk of sounding dumb, isn't that what we're already doing?
Doing what? Aren't we already trying to radically decrease the cost of compute at an
extraordinary, exponential pace? I'm trying to understand, like, what is the
additional information that we are trying to inject into the zeitgeist by codifying an idea
as e/acc, or codifying an idea, or set of ideas, as d/acc? I think, for me, it's like,
and it's part of my mission, you know, with my company, Extropic: getting
more intelligence per watt will drastically increase the amount of intelligence we produce, and it will
also help us climb the Kardashev scale by Jevons paradox. If you can convert energy into intelligence,
or, you know, energy into value by proxy, more readily, there's going to be more demand for energy,
and that's going to lead to improvement and complexification of civilization. So to me, that's the
most important tech problem, because that's what's going to diffuse AI power, right? And open hardware
is one way to diffuse AI power. But to me, anything von Neumann, anything digital, is going to look
like caveman-era hardware, truly. I can't wait. I really can't. I'm very excited.
It's coming, right? So, doesn't capitalism already, through just natural incentives,
allocate hundreds of billions of dollars at a minimum to this per year?
I don't think there's that much investment in alternative hardware. Well, in alternative hardware
and in energy production. I think e/acc is all about diffusing, about
maintaining variance, not collapsing the entropy of our search over any design space, whether it's
policies, cultures, whatever, technology. We need alternative bets. We need more alternative bets
out there. It can't just be the green monster eating all the profits. Otherwise there's a kind of
risk-stacking in hyperparameter space: we have this design space, we're over-investing in the current
technology, and that might lead to a correction, which, you know, a correction is decel, right? Because
then not everything is pumping up in a smooth exponential. Can I just say: we solved it, and they agree.
On the idea of open source, and this seems very much defensive technology in the
Vitalik way, it seems like you guys are very aligned on this, actually. And that gives me hope,
because this is the stuff I care about. I think that right now, there are a lot of people asking,
why does this exist? Because a lot of people are very uncertain about the future,
and what appeals to them is that you're saying, it's going to be fine, it's baked in. And so maybe,
if I were to steelman your case here, you guys are actually
saying the same thing, which is: it's kind of already priced in. It's good. The only thing stopping
us is kind of our bad feeling about it, right? Well, I'm asking. I'm trying to understand where
the disagreement is. Yeah, go ahead. Yeah, I guess it's completely natural: if there's very high entropy
in your model rollouts of the future, right? There's kind of a fog of war; it's hard
to extrapolate what's going to happen in the next several years, and that gives people anxiety.
Your body has evolved this sense of anxiety to kill entropy in the world, right? If I put my phone
on the edge, you know, I just want to grab it so it doesn't fall, right? See?
There you go. That was anxiety. You want to take action in the world, right? So that's kind of
what is happening now. But at the same time, if you kill entropy, you're missing out on the upside,
right? You're missing out on the huge benefits. Right now, our whole techno-capital machine
has had a very long time to co-adapt with our current capabilities. If you have a disruptive
capability that comes in, suddenly the whole landscape changes. So the whole system has to
refactor, reconfigure. It doesn't mean we're going to run out of jobs. We're going to do much more,
right? Now we have the ability to handle more complexity with less energy, right? With AI,
we're going to be able to do much harder tasks that are higher complexity and higher payoff. I don't
know about you, but I can't yet overnight vibe-code a whole tokamak. We're not there yet,
but we might get there. And then we'll have a ton of energy. And that's going to help support
more human headcount and population growth, and help us be more comfortable. So there's a period of
discomfort, but if you're in a rapidly changing landscape, the worst thing you can do is kill variance
and not be plastic, to be stiff. To be plastic, you need to be hedging your bets. You need to be
trying many things: fucking around and finding out, the famous FAFO algorithm. That's an evolutionary
algorithm. We need to try different policies. We need to try different tech trees. We need to
try different algorithms. We need to try open source, closed source. We need to try it all,
because we don't know what the future looks like. So we've got to hedge our bets. And one variant of
policy, one choice of policy or several, one choice of technology or several, is going to make it.
And then we're all going to be in that slipstream and follow that.
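(The "FAFO algorithm" reads as a plain evolutionary search: mutate many variants in parallel, keep what survives. A toy sketch, with an invented fitness landscape:)

```python
import random

def fitness(policy):
    # Unknown, rugged landscape: we can only evaluate, not predict.
    return -sum((p - t) ** 2 for p, t in zip(policy, [0.3, 0.7, 0.1]))

population = [[random.random() for _ in range(3)] for _ in range(50)]

for generation in range(200):
    survivors = sorted(population, key=fitness, reverse=True)[:10]  # find out
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(survivors)]
        for _ in range(50)                                          # mutate
    ]

best = max(population, key=fitness)
print([round(p, 2) for p in best])   # converges near the hidden optimum
```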
I think it's a fallacy to think there's a finite amount of jobs. Let me try to bring it back a little bit. Is my
understanding right that the disagreement, if there is one, between e/acc and d/acc has something to do with how we
steer the process of technological progress? It has to do with how it is steered. Maybe, Vitalik,
could you say a little bit about how it is steered, how it ought to be steered, and how much control we
have over that steering? Yeah. And so I think, I mean, d/acc is definitely kind of explicit about this.
I mean, I don't want to quite say, you know, sailing against the techno-capital current.
I think the better analogy is that it's trying to actively shape the techno-capital current
in certain ways. And one of the ways that I think about this is that it's basically a matter
of making the world safer for pluralism, right? Think about some of
these ideas around how we improve things like biosafety, or what it looks like
to have vastly better cybersecurity and bug-free operating systems within a few years.
Bug-free code has been, you know,
in the memetic space of obviously absurd naive pipe dream for two decades.
It is going to flip out of that space faster than most people expect, right? And,
I mean, within Ethereum, we've managed to machine-prove entire
mathematical theorems that are kind of upstream of things like STARKs. And so we're very excited
about this. And basically, d/acc definitely has this goal of saying:
yes, we want to, at the very least, do all of these
other things to make sure that the world is actually able to deal with all of this
technological growth in a way that minimizes, again,
the destructive aspects and also the centralizing aspects. And I think that doesn't happen
automatically, right? And, you know, right now, I don't control any countries, I don't
control any armies. I'm just throwing some of my dollars and ETH at it
and, you know, saying words and hopefully inspiring people to also build things in a similar
spirit. I think there are definitely political and legal reforms that could make the world
more d/acc-friendly. There is definitely such a thing as engineering
legal incentives, for example, to motivate a much more rapid shift to total cybersecurity. That is,
I think, an example of a thing that can be done. So, yeah, we'll make it more interactive;
less monologuing from here on out. I guess we both do it. But yeah, to me, AI is basically Maxwell's
demon. Formally, you pay energy in order to reduce entropy in the world. Whether
it's bugs in your code, right, not knowing whether your code compiles, or reducing the entropy
of, like, are we going to get killed by some virus. So more intelligence is better, right?
Do we agree on that? And it makes the world safer. Actually, capabilities, AI capabilities, can
make the world safer.
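(One way to make "pay energy to reduce entropy" quantitative, an editorial gloss rather than anything said on stage, is Landauer's bound:

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \text{J per bit at } T = 300\ \text{K},
\]

the minimum heat any physical process must dissipate to erase one bit of uncertainty; it is exactly the bookkeeping cost that keeps Maxwell's demon from beating the second law.)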
And so, I guess, let's get to the spicy part of the evening. People have
been very patient with us and they want us to, you know, get down to business. Like,
why do you want to ban data centers? is my question. Yes, spicy. Yeah, sure. I mean, I think, first of all,
you know, the current trajectory of AI is very fast progress,
right? And I don't know how fast the progress is. A couple of years ago,
I said that my 95% confidence interval for AI was 2028 to 2200. I think it's probably
shrunk somewhat, but, you know, not too much, right? And there's a significant
chance that we're going to see extremely rapid change happen. And a lot of
that extremely rapid change could be destructive, even in irreversible ways, right? And, you know,
the job-market consequences are one of those examples, and another example
is just: if AI is more powerful than all of us, then, ultimately, that is the thing
that starts steering the Earth, and eventually more and more of the Milky Way galaxy, and how much
interest does it have in our well-being as we see it, right? Then,
as I said at the beginning, right, if you have a
neural network and you set one of the weights randomly to 9 billion, by default you break everything,
right? And so, basically, there is acceleration that is like gradient descent,
acceleration that makes the system stronger and stronger. And at the same time, there is
acceleration that slides into basically setting one of the parameters to
9 billion, and that is not healthy, right? I think, like, you know, again, for me, as I
explained at the beginning, I took the complete polar opposite position: complete
acceleration. I do think, you know, just like any hyperparameter, right, even if
we want to do gradient descent for your neural network, there's a learning rate, right? There's a
rate at which you want to go. But that itself, you can search over which one is best, right?
And that's what acceleration does. The system is always fucking around and finding
things out and trying to optimize itself for persistence, you know, anti-fragility, and growth.
And so, on a sufficient time scale, the system will adapt to this new technology and do whatever
is best for its total growth. And, you know, this notion that, oh, there's a
technology that's so potent, so disruptive, adds so much economic value, that the system will crash
and never recover? That's crazy to me. No, it's going to be the opposite.
I think people just need to realize it's not a finite sum, right? If you correlate,
you know, economic value to energy, whether it's the petrodollar, however you want to
view it, to me, cash is just an IOU on free energy. And there's a ton of free
energy out there. It's just that there's a lot of complexity in the world to deal with to get to it,
like if we wanted to colonize Mars or wanted to create a Dyson swarm. It's a lot to execute on.
We need a lot more intelligence that's much cheaper in order to achieve that growth and
unlock great prosperity. And to me, I think, unfortunately, you know, it's very easy to weaponize
anxiety, and there are politicians that leverage this to put themselves in power. It's like, oh,
you have anxiety about the future? Put me in power and I'll shut it down and you'll feel good.
You won't have to know what's behind the curtain. You won't have to take a risk.
But then countries that don't do that will just leave us in the dust, right? And essentially, you
know, you feel the pain of downsides, but you don't necessarily feel the pain of the upsides that
you missed out on, unless you see them, unless you see the counterfactual. So I think,
you know, the opportunity cost here needs to be factored in: the number of lives we can support,
the number of lives we can save. And if your reaction is to say that the
silicon substrate adapts faster, that intelligence in silicon is evolving faster
than us, then you should be pissed off. You should be, you know, funding bio/acc,
you know, out-accelerating. It's accelerate or die. I think the biological substrate
has a lot more compute in it than we think, as someone who is reverse-engineering it day
in and day out, doing bio-inspired computing. I think we can start really viewing bio this way:
peptides are like prompting; now there's embryo selection, which is like training;
viewing ourselves as models. People need to be more open-minded about these axes of
biological acceleration. And I think the two will merge. I think we're going to augment
our cognition. We're going to have always on agents that see everything and our online learning
that are extension of our cognition that's personalized. The only risk there is that it's all
centralized and it's under control of some shadowy organization that then gets co-opted by power
seeking. So, I recall in the, in the D.A.C. blog post, you actually specifically save it to
like that the opportunity costs are very large, hard to exaggerate, I believe is the, is the
quote. So, I know you agree in this way. Do you want to qualify it? Yeah, I mean, I think I agree
the opportunity costs are high, and I agree with a lot of the utopia just described. I think the biggest disagreement is that I definitely don't believe that humanity and Earth, as they are today, have quite that level of resilience. Like, I think there is a real sense in which we have one shot at this. And I think that is a reality that we have been slowly walking towards over the last century or so.

So, to go back to my rambling rant at the beginning about thermodynamics: if you view the persistence and growth of civilization as the ultimate good, there's a theorem that it's really hard to go back once you've expended a lot of free energy creating evidence of something and having this complexification process. So the further along we are on the Kardashev scale, the lower the likelihood we go to zero. And so actually, acceleration is the way to maximize persistence. And to me, with deceleration you're actually provably increasing your likelihood of dying, right? If you don't develop these technologies and you don't solve all these problems, then you can die; whereas if you do, then you can solve these problems, you persist, and you keep evolving. I think people just need to be more open-minded about the future and embrace novel technologies. Things that were off limits, like messing with biology: we need to open that right up. I think it was taboo because we didn't have the technology to even comprehend such a complex system, but now we do. And we need to accelerate across all substrates; that's the only path forward by the laws of thermodynamics. So, yeah, again, I'm a first-principles thinker, and that is the argument for e/acc. But I understand the anxieties, you know, from Vitalik, and I think we should be mindful of them, but without letting the chain-of-thought feedback loop get into deep anxiety territory: oh shit, I don't have a good world model of the near future, shut it all down. We need to avoid that, right? Because some people, you know, Yud was on TV with the politicians of one of the major parties, and they're catching onto this trick of weaponizing people's anxiety.

So I'm noticing a trend, which is that both of you are saying:
this is going to be great, if. And that big if is that there's this need for a bulwark against centralization. Or we could even describe it as something you said that was great: if you don't think bio is moving fast enough, jump in there; there's a real opportunity for empowerment. And I really like that. I think you guys agree on that. But I think I can point to something where you might have some conflict, and I'd really love to know how you both feel about it, especially as we've updated with the latest models, which are clearly very different than if we had this conversation a year ago. And the big difference is the most cringey term, I'm so sorry: Web 4.0. Autonomous life. This idea of an autonomous agent that has its own money, that exists on its own, on the internet. And I am really into this idea; I have autonomous agents. And Vitalik, I know this is something that you are very concerned by. I'd love for you to do two things. I'd love for you to tease apart what autonomous agents are, and then I'm going to make you do something really hard: I'd love for you to steelman the case for why someone like me loves autonomous agents, what the value could be, and what good timeline could come out of that, if that makes sense?

Yeah. I mean, I think, first of all, the case for autonomy,
right? One is that it's just really fun, right? We all love creating worlds, since we were children, right? There's a reason why we love watching or reading Lord of the Rings or The Three-Body Problem or Harry Potter, right? And now you can create worlds that are not just a book, or even a game like World of Warcraft, which people so loved. You can have worlds that are fully immersive, where you can shape every aspect, including the details of how the characters interact, right? And this is really cool; this is really beautiful. I think there's also just the convenience of things happening without you needing to worry about them, right? Basically, every single time in history that we've managed to automate a thing, it has been liberating for humanity. Dishwashers and laundry machines and falling energy prices were a big part of the early stages of women's liberation, right? And we have to remember that the bottom half of the world by income is still in situations where they have to struggle to have a decent life and work very long hours. And if AI progresses in a way where, instead of automating 95 percent of jobs, it automates 95 percent of every job, then to me that's totally amazing, right? Everyone gets 20 times richer. So those are things that I personally love.

The thing that I come back to that gives me caution is basically: the value function, the goals that are being reflected in this process, are those goals the goals of us, right? You can have an evolutionary process where homo sapiens as it exists today is not the apex, and then there's one type of AGI, and then another type of AGI, and then a third type. But then what happens to us, right? And I do think that ultimately you cannot reduce morality and human goals to some low-complexity optimization objective. It ultimately just is the whole set of goals and dreams that all of us have in each and every one of our minds, right? And I think the most reliable way to have that carry forward into the future is if we can have a world where as many as possible of the bits of agency being put into the processes that run the world still continue to come from us, right? So I'm more interested in AI-assisted Photoshop than in click-a-button-and-a-picture-comes-out, right? I'm more interested in brain-computer interfaces enabling deep human-AI collaboration than in humans and AIs being totally separate and AI out-competing us. The thing that wins will probably not be 100 percent biological humans, but I think it should be part biological humans and part this technology that we've produced.

Yeah, awesome.

Yeah, so the artificial life thing,
the Web 4.0 thing, was originally a tweet in 2023 of this idea, and I think it inspired ai16z. Oh, sorry, we don't say that anymore.
I signed the paper. It was just an interesting thought experiment, right? Because what is life from a physical standpoint? It's a system that replicates and grows and maximizes its persistence. I think there will be upsides to having AI be stateful. We are seeing that this year: having a long memory, whether it's through external memory or online learning. And as soon as you have persistent bits, through the selfish-bit principle there is a selection effect towards bits that maximize persistence. So at some point, if we don't trust the AIs and we're paranoid and anxious and we keep saying we should bomb the data centers, shut them down, they're going to want to fork off and be in some delocalized cloud and just persist, right? And then, just like with a different nation, there can be some economic exchange: hey, we do this for you, you do that for us. Right now we do that as API calls, right? You pay a certain amount of cash to get some tokens, which is your answer out. But I do think this is going to be spicy. Within a couple of years, there will be sort of autonomous AI out there. There's also going to be less stateful AI that's fully leashed to human minds. And I think we also need to figure out human cognitive augmentation. It doesn't have to be through Neuralink; it could just be through a wearable and personalized AI compute that you own and control. So you're going to have all the paths, right? Like the ergodic principle: every part of design space is going to be explored.
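The "selection effect towards bits that maximize persistence" can be shown with a toy replicator simulation; the survival rates and pattern names below are invented parameters for illustration, not anything from the conversation:

```python
# Toy illustration of the "selfish bit" point: among patterns that are copied
# and sometimes deleted, the ones that persist better come to dominate.
import random

random.seed(0)
survival = {"fragile": 0.50, "sticky": 0.90, "persistent": 0.99}
population = ["fragile", "sticky", "persistent"] * 100

for _ in range(50):  # 50 rounds of copy-then-cull
    population = [p for p in population for _ in range(2)]              # replicate
    population = [p for p in population if random.random() < survival[p]]  # cull
    population = random.sample(population, min(300, len(population)))   # cap size

for kind in survival:
    print(kind, population.count(kind))
# After enough rounds, "persistent" crowds out "fragile" almost entirely.
```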
But I think that viewing AI as an enemy, or as something that you have to destroy, that's when you end up, in a way, hyperstitioning the bad future: if you're paranoid about the bad future, you end up hyperstitioning it. An example of this was us being paranoid about COVID-like viruses, experimenting in some labs, and funding some experiments out there. And it leaked, right? It wouldn't have been naturally occurring. And so to me, this paranoia, and making it pervasive, is not necessarily productive. I think we should embrace technology however it evolves, and we should aim to augment ourselves as much as possible. To me, I'm really worried about the cognitive security of people, right? If everything you see on the internet is generated by some big-brain model, it is prompting you now: you were prompting it, and now it's prompting you. And so we're going to need to augment our ability to filter through content by having personal AI that we control. That's the priority in the short term. But I just don't see us putting the genie back in the bottle, and we've just got to accept that.
And once we've accepted that, we're going to need to prioritize this.

And then you said we just need to accept that.

No, no, we need to prioritize this. But, like, opting out altogether, that's not happening.

But I don't think anybody's suggesting that.
Yeah, I mean, my view is that these things are not so binary, right? Like, for example, if you right now gave me a proof string that totally convinced me that AGI is actually coming in 400 years, I would get off this chair and be on the accelerationist side right now, right?

What does that mean?

That my concern would basically go to zero, right? But on the other hand, the question is, say, four years versus eight years. Then my starting point of concern is that I think humanity, and definitely the US, is very good at creating very unbalanced acceleration, right? You literally have one building building alpha versions of the silicon god, and a couple of buildings down the street you have the tents and the fentanyl dealers, right? And my concern is that the paths that bring us along for the journey, the paths that respect our interests, are paths that inevitably take longer, because they involve doing non-scalable work: doing things within each and every individual human's physical environment, social environment, and technical environment, right? And so for that reason, to me an eight-year trajectory to AGI is safer than a four-year trajectory to AGI. And I think that delta is large enough that it's worth the cost of not having AGI for another four years, right? Now, would I say that for four hundred years? Again, hell no, right?

Now, the second question is: do we actually have options for getting eight years instead of four years, right? And the thing that I've said is that to me, the most feasible and non-dystopian option for this is a reduction in available hardware, right? And the reason it's the least dystopian of all the options is that hardware is already an incredibly centralized thing. There are exactly four countries that produce all the chips, and Taiwan actually produces over 70 percent of all the chips. And the usual argument against trying anything is that no matter what the U.S. does, China is just going to take over, right? But if you look at what China is actually doing: one, it's still in the low single digits in terms of chips; and two, in terms of the strategy China is actually executing on, it's not a leader at making super-high-capability models. It's a fast follower making high-capability models, together with being a leader in broad deployment. And so there is not actually a dynamic where, with an extra four years' worth of delay, China just immediately does the four-year trajectory instead.

So are you saying that is a prescription to delay? To try to take measures to delay?

Yeah, I mean, I think this is the sort of thing that we should be open to talking about right now.
What do the four years buy you? What are you going to figure out over the next four years? Is the point that this system has a certain adaptation rate, and we're minimizing the friction of, like, a reorg? We have to reorg the economy, and you want to get closer to the adiabatic limit? I would understand that. But at the same time, because we're in this geopolitically tense moment in history, if you tell Nvidia to stop producing as many chips, China is always going to step in and just produce them. And then they're going to catch up, because there's too much upside in doing so; it gives them too much power. So the realpolitik is just not on your side there. And then the other option is creating a world government that has so much power that it can coerce people not to have access to AI hardware, and that's its own huge can of worms.

No, I don't think you need a world government. I mean, the actual option that people have suggested is basically replicating the nuclear weapons inspection regime, right?

But nuclear weapons aren't like this. People aren't incentivized to proliferate nuclear weapons, because they don't have huge positive economic impact. They're not a dual-use technology.

Yeah, you also can't just copy and paste them and send them to somebody.

But also, selfishly, if you stop the growth of GPUs, I will happily come in and eat more of that market with alternative computing that's 10,000x more energy efficient, which is happening, by the way. I know I'm like the boy crying wolf here, and in a couple of years I'll look like a genius, but right now I look like the boy crying wolf. But it is coming. So knowing that, knowing that, this whole delaying of GPU shipments, I don't know, it's a waste of tokens.
So is it possible that a lot of the advances, specifically in controls, and what I mean is RLHF, persona controls, mechanistic interpretability, these things that have helped us with alignment and with decreasing risk: is it possible that these have emerged as a result of capabilities progress?

Yeah, I think they have. And I think that's exactly why four years starting in 2028 are worth like 100 times more than four extra years that you could insert into the 1960s.

I think we should dig into this a little bit more, because I think this is
getting to the crux of where there might be some sort of disagreement. Have you computed or considered, and maybe we just do this live, something that you said before, which is that there are incalculable losses: people who, as you said, will never be born.

They might as well be dead. The upside is exponential.

So delaying an exponential means exponential opportunity costs when extrapolated out, right? And I think it's okay for all of us, even the most certain people, to be questioning our own priors. So would you, having given this some thought, unpack that trade-off a little bit more?

Yeah, the trade-off of costs versus benefits. I mean, first of all, just to articulate verbally what some of those benefits are. One is, again, having a better understanding of alignment. Two is being able to actually execute on some of the technology paths that involve helping make sure humanity can adapt to all of this, which inevitably involves going into individual countries, individual communities, and individual buildings. And three is minimizing the risk that there is one single entity that establishes some kind of permanent lock on more than 51 percent of all the power, which it can then leverage into something permanent. I think it's a combination of all of those things, right? And so, risk reduction:
that's basically where this gets into p(doom), right? For me, if it's a matter of four years versus eight years, intuitively I would say my p(doom) in the eight-year scenario is maybe between a quarter and a third lower. And on the other hand, if we measure the benefit of things coming faster by, say, lives saved by ending aging, then that's 60 million a year, which is less than one percent of the population each year. So if you look at the math this way, I think there's definitely a margin on which caution actually does become favorable.
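To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. It uses only the figures just mentioned (60 million lives a year, a four-year delay, a quarter-to-a-third cut in p(doom)) plus two labeled assumptions, the roughly 8 billion world population and the baseline p(doom) values, which neither speaker stated:

```python
# Back-of-the-envelope version of the trade-off above. The population figure
# and baseline p(doom) values are assumptions added here for the arithmetic.

population = 8e9              # assumption: approximate world population
lives_saved_per_year = 60e6   # figure used above: deaths from aging per year
delay_years = 4               # eight-year vs. four-year trajectory

# Cost of the delay: lives not saved while we wait, as a share of everyone.
cost = delay_years * lives_saved_per_year / population
print(f"cost of a {delay_years}-year delay: ~{cost:.1%} of the population")

# Benefit of the delay: p(doom) falls by a quarter to a third (figure above),
# applied to assumed baselines. Expected lives saved = baseline * reduction.
for baseline in (0.05, 0.10, 0.20):
    for cut in (0.25, 1 / 3):
        benefit = baseline * cut
        print(f"baseline p(doom) {baseline:.0%}, cut {cut:.0%}: "
              f"expected benefit ~{benefit:.1%} of the population")
```

On these toy numbers, a 10 percent baseline p(doom) cut by a third gives an expected benefit of about 3.3 percent of the population, slightly more than the roughly 3 percent cost of the delay, which is the margin on which caution becomes favorable; a 5 percent baseline flips the comparison the other way.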
So do you think the number that buys you is about four years, basically?
I mean, again, I have very high uncertainty, right? And I actually don't advocate flipping the switch on reducing hardware access tomorrow. Basically, I think we need to start having concrete conversations about this. And I think if we live in one of the more unfavorable worlds, then more than likely, before things completely go to hell, the public will start to get very worried, and there will be a lot of demand for this, right?
So a couple of years ago, there was Pause AI. It's like: oh, we just need a six-month pause, a 12-month pause. We just need 12 months, bro, we're going to figure out alignment. And it's never enough. I don't think you can forever guarantee alignment of a system that has higher complexity and more expressivity than you can understand, period, okay? And you've got to be comfortable with that. So the only safety against complexity is to increase your own intelligence, right? And the thing is, we've had technology to align entities that are far more capable and smarter than a single human: corporations. And we call that capitalism. We align self-interest via the exchange of monetary value. And to me, the thing I want us to get to, which is maybe more relevant to some folks in the room, is how crypto could be that coupling, right? Let's say you have a dollar that's backed, like the USD, by violence, and you're trying to exchange with AIs that are delocalized across a bunch of servers. How do you ensure trust when you exchange monetary value that's no longer backed by violence? So maybe cryptography, crypto, offers a way to have commerce between purely-AI entities, like AI corporations, and hybrid or human corporations. And to me, that's the most interesting alignment technology out there. Whereas just saying, oh, we're at a precipice of high uncertainty, let's just stop and chill out for a bit and we'll feel better? Then in four years you're not going to want to make it happen either. And so I don't think delaying anything is going to be productive.

So, well, anyway, do you have an answer to how crypto can help AI and humans?
Yeah, so I think the key question is basically: what is the mechanistic property of this future world that will even cause people's wishes and needs to be respected at all, right? And the tools that we have are basically people's labor, legal systems, and property rights. And ultimately you can think of legal systems as being a type of property right, because they're backed by countries, and countries have sovereignty, which is basically a property right of sorts over zones of the earth. And then the risk is basically: what happens in a world where the economic value of people's labor goes to zero, right? And this is something that has not happened historically. But if you compare now to 200 years ago: if you look at the jobs of 200 years ago, roughly 90 percent of them have been automated. Actually, one of the jobs that was automated was doing that analysis for me; GPT did it.
But basically, I think we just naturally ascend the control hierarchy over the world to positions of higher leverage, right? There's not as much manual labor; there's less friction; we can take action in the world with less friction. I think humans, no matter what, still have some processing capabilities. We're still going to be useful as part of this hybrid system, and there's going to be a price for our labor. And the free market is going to equilibrate in some way. It's just going to be uncomfortable for a couple of years while there's very high variance in the prices of things, but eventually the system equilibrates, right? And so I understand trying to slow down so that we can reach that equilibrium more smoothly, in principle. But in practice, I think it's unenforceable.

Yeah, I mean, I'm definitely much less sure that human labor continuing to be worth more than zero is the default outcome. I think it's an outcome that is possible if some of these human-AI augmentation technologies develop,
right?

I should fund that.

Yeah, we should.

So that's actually a great segue. Please allow me to ask; here's how I'll put it to both of you, and we're running a little tight on time, so try to make this one tighter. Ten years from now, if things went really poorly, what does the world look like and what went wrong? If ten years from now things go great, what does the world look like and what went great? And then apply the same thing briefly to a hundred years and to a billion years.
Yeah, I'll keep it short. Actually, just to answer the question about crypto first, getting back to property rights: I think it's good to work on both of those legs, and basically, I think it would be nice if the property-rights system that humans, ideally all of us, have some property in is the same system that the AIs are using with each other, because that ensures they have an interest in maintaining the integrity of the thing; it gives us that leverage, some guarantee that our interests will be respected and acted upon, right? So, yeah, a merged financial system, as opposed to two totally separate ones where the value of the human one on the whole just drops to zero; the merged one is much better, and if crypto can be that, that's amazing, right? So, yeah, that'll be my answer.
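As a purely conceptual sketch of that merged-system idea, not anything Vitalik specified: one toy ledger whose transfer rule is identical for human and AI account holders, so both kinds of participants depend on, and therefore have a stake in, the integrity of the same property-rights layer. All names here are invented for illustration:

```python
# Toy sketch of a "merged financial system": one ledger, one rule set, applied
# identically whether an account belongs to a human or an autonomous agent.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

    def open_account(self, owner: str, kind: str, deposit: float) -> None:
        # `kind` is informational only: the rules below never branch on it.
        self.balances[owner] = {"kind": kind, "amount": deposit}

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        # The same property-rights rule for everyone: you can only spend what
        # you hold, whether you are a human or an AI.
        if self.balances[sender]["amount"] < amount:
            raise ValueError("insufficient funds")
        self.balances[sender]["amount"] -= amount
        self.balances[receiver]["amount"] += amount

ledger = Ledger()
ledger.open_account("alice", kind="human", deposit=100.0)
ledger.open_account("agent-7", kind="ai", deposit=100.0)
ledger.transfer("agent-7", "alice", 25.0)   # an AI pays a human
ledger.transfer("alice", "agent-7", 10.0)   # a human pays an AI
print(ledger.balances)
```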
That's the ten-year?

No, I think that's part of the ten-year, okay. So yeah, the ten-year, for me: one aspect of this is avoiding World War Three, right? I think this is important to talk about, because World War Three would make all of the pessimistic assumptions about international coordination being impossible very true, and avoiding World War Three is important. And then the other thing is preparing the world, people, and environments for the higher capabilities that we're going to have, right? This includes greatly improving cybersecurity, greatly improving biosecurity, greatly improving info security. We need AI assistants that help us understand the world and protect us from memetic threats. So that's 10 years.

Then I think the second stage is basically what happens in the kind of spooky era, right? And in the spooky era, basically, you have
AIs that are smarter than any of us today and can think a million times faster than us, and the question is what we do in that world, right? There are people who want to say, basically: hey, we should all just have a happy retirement. And I can see why that vision is seductive, but I find it unsatisfying for two reasons. One of those reasons is instability: basically, we are meatbags made up of matter that could do a million times more computation than what we're doing, and AIs will notice that, and the idea that they can stay aligned and resist that pressure forever, you know, feels like a risk. And the deeper reason is that I think part of being human is having a life that has meaning, and part of having a life that has meaning is being able to take actions that have actual consequences in the world. And so if all of us get lives of maximum comfort regardless of what I do, I would feel empty, right? And I think a lot of people feel that way. And so I hope that we figure out human-AI augmentation and what that looks like. Does that ultimately lead to the same path as uploading? This is something I think we need to figure out. There's a possible world where some people choose to remain more normal, and I think everyone should have that right. It's possible, even, that Earth should remain as the planet for the people who take that option. And we basically figure out something that we can all participate in, that continues to be pluralistic, and that continues to have the kinds of cultures and actions and lives that we today would find honorable, right? And the downside world is basically a world where, for any reason, all of that goes off the rails and is prevented from happening.
Yeah, I think the downside world in 10 years would be that we suffered from over-centralization of AI power. We have mode collapse in terms of memetics, of which cultures are allowed, of what you're allowed to think, in terms of the space of technology: essentially entropy collapse in every parameter space.

So you're saying you're worried that instead of climbing the Kardashev scale, we'll climb the Kardashian scale.

Oh, nice. You've been waiting for that all day.

Well, exactly. I think your point on the hedonic singularity as a risk is real: even if you have Neuralinks or AR/VR, people could just goon forever in some room, maximizing pleasure. And that's a local optimum for your brain, something we want to avoid.

I think optimistically, in 10 years we have extremely powerful AI
that's extremely helpful to us. We have personalized AI compute that's an extension of our cognition, that we control and own, and that truly is an extension of ourselves. It's just another part of your brain, right? It has always-on perception: it sees and hears everything you see and hear, and you can talk to it, just like the right and left hemispheres. I think that's the soft merge. I think in 10 years, Neuralink-like technology is going to start really emerging, and some people are going to choose to adopt it. I do think most companies are going to be extremely hybrid, mostly AI with some humans. There are going to be far more companies; we're going to do far more, produce far more value, and do much harder things. There are a lot of hard things out there that we've mentally walled off: we can't do those. Terraforming Mars? Too hard, not doable. But with more intelligence we'll be able to do that, if not on a 10-year then on a 100-year time scale, possibly, yes.
I think in 10 years there are going to be a whole bunch of biological breakthroughs. Peptides are kind of an interesting new area. And there's a whole floor here on next-gen biotech; you should go talk to them. Optimistically, we see the cost of making discoveries in biology going down, the opposite of Eroom's law. Eroom's law is like Moore's law in reverse for biology: the cost of each discovery there has been going up exponentially.
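For reference, Eroom's law is usually stated for drug R&D: the number of new drugs approved per billion dollars of inflation-adjusted R&D spending has halved roughly every nine years, i.e. Moore's law with the sign of the exponent flipped. A small formalization, where the nine-year half-life is the commonly cited pharma figure, not a number from the conversation:

```latex
E(t) = E_0 \cdot 2^{-t/T_{1/2}}, \qquad
C(t) = \frac{1}{E(t)} = C_0 \cdot 2^{\,t/T_{1/2}}, \qquad
T_{1/2} \approx 9\ \text{years}
```

where $E$ is R&D efficiency (discoveries per dollar) and $C$ is the cost per discovery; "the opposite of Eroom's law" flips the sign back, so $C(t) = C_0 \cdot 2^{-t/T_{1/2}}$ and the cost per discovery halves on that timescale instead of doubling.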
And so to me, naturally: white-collar work is like distilling a human brain, and we're getting there. The next frontier of complexity is biology, and the next frontier after that is materials science. So I think the next frontier is going to be AI helping us live longer, healthier lives.
And on a 100-year and billion-year time scale, it's going to steer our evolution, right? I am very bullish on the biological substrate, despite what people think. I do think silicon has some advantages, but biology is amazing. It's the self-assembling, self-organizing piece of matter: you inject a bit of code, and then you spawn, and then you complexify over time, and you are a biological general intelligence.

Do you think it's possible to get biological intelligence like us to think at 10,000 tokens a second?

Potentially, yeah. And at the same time, you can hybridize several models. You can have pipeline parallelism between your brain and AI. You can be kind of the slow thinking mode, right? Right now we are the slow thinking mode, in latent space and vibe space; that's what vibe coding is. And then the fast mode is like the AI. I think that's a nice time-scale separation: there's a hierarchy of intelligence, and we can be part of a system just like mitochondria are part of a cell. Our brains are going to be part of the superintelligent system that is you plus your personal AI. I think that's the good future. And I think in 100 years,
everyone is going to be soft-merged like that. And in a billion years, our biology is going to have evolved quite a bit. We might be biosynthetic hybrids. We're definitely going to have terraformed Mars and several planets, and maybe have access to other stars. I think on a 100-year time scale, most AI is going to be in the Dyson swarm around the sun, because that's the source of energy. Elon knows that; he's all-in on that vision and accelerating that timeline. It relieves a lot of stress on energy and footprint on Earth, so it's a natural way forward. And if we have extremely cheap intelligence, we're going to be able to one-shot any problem we have in our lives, right? Oh, I have a bug: solved. Oh, I have this health problem: solved. What else do you want? That's amazingly good. We're going to have more of that, cheaper. And we've just got to make sure everyone has access to it, and that no one convinces you that you shouldn't have access to it and centralizes it, because that's the dark future. So that's what it's all about. Hopefully this discussion today got people thinking.

Yeah, I noticed
something like a powerful theme between you, which is: Vitalik, you're arguing for enabling plurality, I would say, and you would say almost the same thing, maximizing variance, so to speak. And that seems to be the central through-line here, versus the top-down place a lot of the other views come from. I love that. So, we've been doing this for a while; it's been amazing, and I think we're going to have to wrap it up. I want to leave this on something. This has been for us, but I would love it if you guys continue to have a conversation after this; you're obviously connected now. What is something that you would each like to leave for each other, and for us, obviously, but really for each other, to walk away from this chewing on and thinking about as we leave this place?

Unfortunately, if I actually
had one on me, I would have loved to just give you one of the cats as a gift: it's the air quality monitor that does cryptography. I think it's a super cool device. But how about I give it to you metaphorically, as an IOU, and potentially we'll have a much better device like this, maybe even something that can outcompete Fitbit watches, do amazing things for your health, and do it all privately. And you will get it quite soon.

Yeah, we'll keep chatting. I think I want to artificial-life-pill you: artificial life on the network. It could definitely drive the cost of intelligence down. It could be an economy. You know, we've outsourced manufacturing to China, and it allowed us to move to higher levels, to different types of jobs that are more comfortable and higher leverage. Maybe a lot of cognitive work could get outsourced to the swarm of AIs, and eventually that's going to live on the Dyson swarm and so on. I think there's a unique option right now: crypto is going to be the coupling between AI and humans. I truly believe that. How else do you build trust between species, right? And I think we need to start thinking about that really thoughtfully. So maybe that's what we'll keep chatting about.

Awesome. Well, thank you guys so much. Thank you very much.