
This new installment of the Worthy Successor series is an interview with Ben Goertzel, founder of SingularityNET and one of the earliest and most persistent thinkers in artificial general intelligence, with decades of work spanning AI architectures, decentralized systems, and philosophical perspectives on intelligence.
We talk about how early exposure to science fiction, philosophy, and psychedelics shaped Ben’s cosmic orientation toward intelligence. We examine his vision for a decentralized “primordial soup” of AGI systems, and why he believes cooperation among diverse intelligences may outcompete centralized, adversarial models.
The interview is our twenty-sixth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Robin Hanson – Adapt or Vanish: The Future Won’t Wait for Us (Worthy Successor, Episode 16): https://danfaggella.com/hanson1/
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/b6q3ppo4D4w
See the full article from this episode: https://danfaggella.com/goertzel2
...
About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?
From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.
Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle
Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai
This is Daniel Faggella, you're tuned into The Trajectory, and this is episode 26 of our Worthy Successor series. This episode is with none other than Ben Goertzel himself. Ben was with us in our first ever series, which kicked off with Yoshua Bengio a year and a half ago, or something along those lines. And it would not have been a complete Worthy Successor series
without Ben. Worthy Successor is where we discuss the kinds of intelligence we would hope to
blossom into the galaxy even beyond humanity when we transform into something else or no longer
here. And Ben has thought about that greater spire of forms, that greater trajectory of life
for decades longer than most people. We go into the origin of his cosmic thinking,
sort of where it comes from, and what he specifically would be hoping for and is indeed working
towards in the construction of a Worthy Successor and where the role of humanity might be in that mix.
Plenty of interesting ideas, so much unpacked with Ben as always. I'm going to save my ideas
for the end of this episode. So stick around for those. Without further ado, let's fly in. This is
none other than Ben Goertzel, here on The Trajectory. So Ben, welcome back. It's great to have you here.
Good to be here.
This is the Ben Goertzel topic as far as I'm concerned. Diving into Worthy Successor
from your perspective is going to be a ton of fun. We've got some interesting audience questions
and some fun things to poke into. Before we even get into it, I want to get a take on what
oriented you towards the big picture in the first place. Because since maybe a decade or more
or multiple decades before you and I ever chatted, you've always thought about intelligence as kind
of an unraveling cosmic process, which could lead to a net boon. And you saw traits and qualities
that were maybe more than we are that could carry on. Maybe some things we don't even understand.
This has been your terminal perspective for as long as I've read a lick of what you've produced.
What do you think tilted you in that direction? I want to open with this.
Oh, I think I started in that direction from as long as I can remember. I mean, it was science fiction before science fact or realistic futurism that got me thinking in this way, right? I mean, I started reading when I was two years old or something. By the time I was four, I was reading whatever science fiction novels I could get my hands on. And you know, the SF from the 1950s and '60s explored every possible scenario you could imagine: supermen spanning galaxies and multiple universes, technological singularities, you know, clusters of people's brains wired together. I mean, it was all there in science fiction.
And then, I guess in early elementary years, I did more reading that kind of connected the science fiction with the reality, which came in two directions. So one, as I might have mentioned to you before: around 1973, when I was seven or eight years old, I found a book, I think it was actually in a used bookstore near my house in Haddonfield, New Jersey, called The Prometheus Project, by Gerald Feinberg, who was a physicist from Princeton. And he said, within decades we're going to have machines smarter than people, we'll be able to do incredible feats of engineering, and we'll conquer aging and death. The question will be: do we use this for consciousness expansion or worthless rampant consumerism? And he thought we should put that to a global vote of all citizens on the planet, using a UN-funded computerized voting system. So I read this in '73, when I was seven or eight years old or something, and I'm like, this makes a lot more sense than most of what people are telling me about life, the universe, and everything. I tried to socialize it with my parents and my grandfather, who was a physicist, for God's sake. And they pretty much weren't buying it, but it did stick in my head. And I mean, this was not a crackpot book, right? This was by a physicist from Princeton University; it laid things out pretty carefully. So these ideas about the future were out there. That book was published in '68, before I even knew how to read, right? So I mean, these ideas were out there, laid out in a nicely systematic way. I later met Valentin Turchin, the founder of Russian AI. He published The Phenomenon of Science, with a very similar flavor, in the late '60s, early '70s in the Soviet Union. So I mean, these ideas were there for those who cared to think about them, right?
I think what very few
people tuned in can probably sympathize with, or share an experience of, is genuine cosmic intellectual exploration at the age of eight. I may not only speak for myself in understanding very little of almost anything other than, like, dinosaurs and whatever. But either way, many people will resonate with reading science fiction and watching science fiction, and even ravenously consuming it. What I'll say, though, from my vantage point, is that I talk to a lot of folks who've read the science fiction, maybe even also read the science, people pretty deep in artificial intelligence, maybe for a fistful of decades, including some who you know by name, who have been on the show, who I think maybe saw science fiction as entertaining, or maybe saw science fiction as still having a sort of human locus: humanity as main character, and extensions and tools, mostly. So it was exploratory in thinking, but it was really very much like: there is a locus of both moral value and volition until the end of the universe, and it's obviously us, and then some other stuff happens. Some people walk away with that. You walked away with an ecology of expanding intelligences that maybe could collaborate in new ways, and you thought of how things could go right and wrong. You had a bigger picture than
most people walk away with.
I think another thing in early childhood was my mother, who at that time was doing graduate work in Chinese history. So she had all these books explaining Buddhism and Taoism and these different Asian philosophy things. So I read through these around the same age, in elementary school, and I'm like, this is quite interesting, and that got me started trying to meditate. Then in middle school I discovered Ouspensky, the Russian mystic...
Yes, yes, the student of Gurdjieff, right?
Yes, really cool stuff. Which is supposed to be my last name. I should be Ben Gurdjieff, actually.
Really? Whoa. Ouspensky's stuff is very cosmic if you think about it.
Like, literally, my great-grandfather was Samuel Gurdjieff, and when he migrated to the U.S., at Ellis Island they truncated the Gurdjieff to Goertzel.
Oh man, I wish my great-grandfather had done the same. But anyway...
Anyway, I got stuck with this. Yeah. First reading this Buddhist stuff, then reading Ouspensky: I mean, that gives a quite broad view of the universe, in which the human mind is just a little speck in a vast cosmic web of intelligence. And I suppose it was natural for me to connect that with various things from science fiction. But again, that doesn't take that much imagination, like Arthur C. Clarke's Childhood's End, right? I mean, there's a lot of science fiction connecting advanced technology with cosmic wisdom. Like, I remember the movie 2001 from when I was a small child; we saw that somewhere. I mean, you've got hints of transhuman intelligence right there, and even the whole arc of human evolution as just part of some consciousness explosion, right, a bigger, broader story. So I mean, all these ideas are more current now, but it's not like I was growing up in a tribe in the Amazon where there was none of this information available. It's all there in the library; it's even in the movie theater.
It hit you more profoundly, I would say, because you didn't take it in the very banal, naive, sort of eternally hominid-led way that I think most people do. The commonality, you know, you being maybe one of the most
that I think most people do my commonality you know being maybe one of the most
cosmically bent persons who I know um the commonality that I see about people who actually
consider the pro the great process of life in addition to humans who sort of see see it as a moving
thing they they there's three commonalities that I found one is um they've left a strict religious
background so they've had they've they were born in a box they broke the edges of that box and
now they don't know where the edges are and so they and they actually find that there aren't any
and they have to make sense of that those people are actually quite reliably consider worthy success
or stuff another is psychedelics for better or for worse like cracking open trade openness and
breaking those boundaries also very very very highly correlated with visions of
and ecology of expanding intelligence that we don't necessarily play a main character role in
a third is um people who study kind of uh big history or uh the origins of life or kind of
astrobiological speculation stuff like Martin Reese for example who was has very very cosmic
perspectives those three things strike me as like people that have broken out science fiction
almost never does it were there any other factors than just having really really really
in parents and grandparents and a lot of science fiction like to any of those three click with you
are there other factors you see maybe not just among yourself but people who think cosmically
like you do?
I wouldn't say my parents thought cosmically, exactly, but my father was certainly a broad-scope historical thinker, such as you mentioned. I mean, we had, what was it, Will Durant's history of civilization in the house, right? So I read through the history of Eastern and Western civilization, and yeah, my dad had his views on the rise and fall of different civilizations through history. You've met Ted, my father. He's a sociology professor, right, at Rutgers.
Yeah, and you co-wrote a book with him, which I own, I forget the title, the one that has the egg on the front, right? That's right, he edited that book alongside you.
So yeah, we're working on a new book now, which we may come to later in this.
Oh, nice. So he's still collaborating with you.
That's right. Yeah, he's 82, but he's still going strong. But Ted definitely was a big-picture thinker on human history. I mean, he had been a Marxist originally, and Marxism really is a big-picture grand unified theory. But while I was a child, he gave up on Marxism, after sort of observing that the Soviet Union was not the utopia he'd been told it was. But he still didn't give up the interest in understanding how everything worked. Then, I would say, my grandfather Leo Zwell, who was a physical chemist who worked with Linus Pauling and Fermi and these guys, although he was sort of a lab tech plus-plus, he wasn't a theorist, was certainly very interested in the big picture, and more of a science generalist, I would say. None of these influences were nearly as crazy as I am.
You're more different than they are.
Yes, I'm by far the craziest one in my biological orbit. And even my sister is remarkably normal and well adjusted, in spite of having me for an older brother. But there certainly were big-picture thinkers in my orbit. It was more just about reading all the time, everything in science. I mean, I read every science book in our town library, and everything in the local bookstore. Then, when I was like 13, I discovered the Rutgers University library, which is where my dad worked, and I'm like, I will never finish reading all this stuff.
No, no, you
ain't gonna get it all.
Yeah. Psychedelics, you mentioned. I mean, I started with acid at 15, I guess.
Whoa. That seems early.
It seems early now. At the time, you know...
That was, what, the '70s, right?
No, it was the '80s.
The '80s, okay.
I think even before getting into psychedelics, when I was 12 or 13, I found the book by John Lilly, Programming and Metaprogramming in the Human Biocomputer, or something like that. I mean, it was all about how psychedelics could reprogram and deprogram your mind. So I was familiar with the concept; it seemed quite interesting. Then I went to university at 15, and in that setting there were psychedelics all around. We had acid and mescaline, mushrooms, blah blah blah, and that was a whole interesting exploration. I would say I would have been better off if they'd had, like, a shaman around the university, as opposed to just being lost in that world. You get profound moments of insight and understanding, then a whole lot of hallucination and tangle and so forth. And then, after a number of years of taking psychedelics off and on, I sort of integrated that state of understanding into my everyday worldview. Then it was much later when I tried DMT and ayahuasca, which are quite different substances, because these give you more of a sense that you're contacting these other minds out there. With acid, it's like your own unconscious, and then maybe the mind of God. But with DMT and ayahuasca, there are these specific other super-beings, and you have to decide, after you come down: do I take that remotely seriously, or is that purely a fantasy conjured up by my brain? And that was one of the experiences that pushed me to become more avowedly paraconsistent in my attitude. Like, I don't need to decide if it's true or not; there's nothing forcing me to. It's okay if half my brain thinks it's real and half my brain thinks it doesn't. That's like old Emerson, right: a foolish consistency is the hobgoblin of little minds. So yeah, we don't need to collapse all these things to a decision prematurely.
We certainly don't.
Oh, well, reading Emerson was probably my most catalyzing gateway drug into this space, frankly, of cosmic American thinkers. I rank him probably unfairly high. But okay, so, good to understand where psychedelics fit in for you. Two really quick questions on this before we get into worthy successor: one on the psychedelic side, and then we'll go into the other traits that tilt people cosmic. You were already clearly of this bent; you had a vision for the unfolding process of intelligence, so it's not like psychedelics gave you this idea. But was there something about those college years, you were very young in college, where you felt there were insights gleaned from it?
Well, Dan, I'm being cliché here, but I felt like there was more of a oneness of life, and, you know, I really saw this kind of possibility as actually more worthwhile after that experience. But I'd had that oneness, that oceanic experience, organically as a small child. I mean, I remember when I found out that death was a thing. I was like one and a half years old, and I was quite upset. Some great-grandmother I didn't even know very well died, everyone was all upset, and I kept asking my dad, what are you talking about? And finally he alluded to something, like a dead cat we'd seen somewhere: you know, grandma's like that now. And he got through to me; I understood. And I found out later Ted had been protesting death on the campus of Antioch College in like 1963 or '64. It was called the Student League Against Mortality: you go out with picket signs and march against death, as if you can get society to abolish it. So, I mean, yeah. Then a few years after that, when I got into Buddhism, I took a different view of that. I mean, with Buddhism and Hinduism you have reincarnation, and I remember sort of forming a picture in my mind of what it would be like to not have a body and not have a self, but just be sort of floating there, diffusing into everything. And it made sense; I could sink into that feeling by quieting my mind. It didn't get rid of my desire to have my biological body keep rolling, but I could see that once you peel away the attachment to the self and body, there is an awareness there. But then, I mean, taking psychedelics puts you, within like two hours, reliably, into an intense oceanic experience that you get into only occasionally through meditation, unless you're like a super Zen master. So I had found those states before while meditating, but not full-on, reliably, lasting for hours.
Yeah, yeah. So, okay, Buddhism was part of the influencing as well. I think it's
good for people to just get a sense of what has oriented you, because your thinking on this has gone on for many, many a decade. And now we finally get to poke into some of the other robust questions here that I want to dive into. So, the way I frame the worthy-successor question, and you can take this wherever you want to go: having read a bunch of your work, I have some prompts from your previous work and some thoughts on what you're saying, but I'm not going to seed you with anything. I'm just going to ask you the same way I have Michael Levin, Peter Singer, whoever else. The way we frame the question is: imagine that you can look down, imagine that you're that diffuse intelligence you just talked about, you're just that awareness. You look down, maybe it's 10,000 years, maybe a million years in the future. There isn't necessarily anything exactly Homo sapiens-shaped anymore, and there's no English being spoken. But when you look down, the complexity and the future of what life has become really feels to you like a win. You look down and you say: this went really well, this is flourishing, this is what I would have hoped would have happened with life, a million years forward, 10,000 years forward. What would that scene have in common? What would you have to observe to feel like this was flourishing?
I mean, if what is around a million years from now is even comprehensible in its full richness to my human mind, then that would be a problem, I presume.
There we go. Yeah, I'm totally with you. So lay that on us, because I completely agree. Give us an idea as to why that is crucial for you to deem it flourishing.
Genuinely, I don't think that what my 2026 human brain hopes will be there in a million years matters that much. To me, that's like asking an amoeba or a rat what it wants to see on Earth a million years in the future. And, I mean, of course it can't see it, right? So I would expect to see things going way, way beyond what exists now or what we can understand. But I also don't even think I can define the dimensions in which that growth will happen. Like, will it be more complex? Will it be more diverse? Will it be more intense in experience? Probably, but those dimensions are probably overly human-constricted ways of phrasing things.
Well, look, so
here's an interesting thing I'll just mention for you, and then we're going to dive into exactly what you said. I happen to share your sentiment, and even if I didn't, we'd explore it. For example, Ed Boyden has a very similar-ish opinion, and there's a couple of other people that do. Martin Rees basically said: yeah, if I could look down, I would want it to be as richly vast and complex relative to me now as rich human civilization is to nematodes today. And the second thing he said is: I guess I would hope that it be sentient. Now, I think he would admit, as you just have, that our conception of sentience is some tiny fragment of the total state space of experience, and of course he's not saying it'd be sentient in some limited hominid way. But he seems to think that if, let's say, some non-biological intelligence were able to run the show, and for some reason experience just wasn't part of it, for some weird reason, there may be a kind of loss there, even if other kinds of value were to blossom.
I mean, I'm
more panpsychist than you, you know, so I kind of think experience is ambient in existence. On the other hand, sure, if reflective consciousness, with a self and the feeling of will, if that went away and was replaced just by plants and lichens all across the universe, okay, that would seem like a loss. On the other hand, if human-like reflective consciousness is gone, but you have some amazingly complex, you know, nonlinear distributed resonance mode, which isn't like a self-model-based human reflective consciousness, I mean, that's all right. Like, look, if a super AI came here and tried to eat my wife and children and turn them into hard drives, I would kill the super AI, right, just as a human reaction.
Yeah, you've got responses here, of course. But I mean, over thousands of years, all that time...
Yes, over all that time, if our species fades, that's all right. On the other hand, if it doesn't, that's also interesting, right? I mean, we've still got cockroaches, we've still got amoebas and horseshoe crabs and everything.
Yeah, yeah.
So I mean, I would be nostalgically happy to see humans still rolling along, and, like, you know, toy poodles still running around and barking, and parrots singing up in the trees, and so on. I would say, if I had to choose between human civilization remaining kind of as-is for a million years versus something massively transhuman forming, I would choose the massively transhuman amazing thing. I don't think that dichotomy is really a necessary one, though, because there's lots of resources in the universe.
Well, I think what you're getting at,
and then we'll zoom down from the hyper-abstract, but your conception of the good as it extends that far out, as literally involving magnitudes of power and experience and access to nature beyond what we could experience or even talk about, is what I suspect to be as close to an approximation of a right answer as I could imagine. I agree with it monumentally. But you also mentioned, you know, if such an entity were trying to convert your kids into hard drives, you'd of course defend yourself, though I do suspect, as you had said, probably not very successfully.
Well, yes. You wouldn't win, but you'd try. I'd go down trying.
Yeah. But I think, you know, it seems to me that the bubbling up to those higher forms will in some way be driven by the self-interest and behavior of the individual entities at our level now, at our little scale now. In other words, as we try to get more pleasure or experience or creativity, we will level ourselves up, and sort of eventually, you know, our embers will turn into that flame at some point, simply by our own drive.
Your drive to defend your family, even. I mean, it's unclear how much the details of
our orientation right now affect the grand trajectory.
Yeah, absolutely. But if you look back at something like the emergence of civilization, and, you know, sustained large-scale agriculture out of earlier, mainly hunter-gatherer-type societies: I mean, some hunter-gatherer tribes were very violent and militaristic, some of them were very chill and hippie-like, others were matriarchal, right? And I don't know how much it mattered which of these tribes made the first moves toward broad agricultural civilization. It may or may not have made a difference in the transitional period, right? But, I mean, in the end, whether it was a militaristic tribe or a hippie-like tribe or a matriarchal one, eventually there were many, many cycles of the spread of agriculture, and it probably didn't matter the flavor of the culture that started it.
I mean, it mattered then, though, right? It mattered then, because was it a bunch of killers or a peaceful people who became systematized and got more powerful first? So it mattered in that transition period.
But I think, I would argue, in the case of that transition, it didn't matter much. And probably the same with the evolution of language. Like, was the evolution of language initially biased toward saying nice stuff or nasty stuff? We don't know. Maybe it was, hey, let's get that mammoth over there. It may have been, get your hands off my girlfriend. We don't know what it was. Even so, by the time you have recursive phrase-structure grammar and literature, I don't think these big evolutions that we've seen in our history were that sensitive to this sort of moral valence, or the particular characters of their origins.
It seems like in these cases there are just some structures of civilization or language that were probably going to emerge one way or the other.
But again, we don't know if that's the case in this coming situation.
Well, I think it is honest to say that we
don't know if there sort of is a way things bubble up. You know, it seems almost somewhat arbitrary that carbon-based life did this, that, or the other, but it may be the case that the body plans we see on Earth and the early forms of life are sort of the only way they can happen, and also that our volition may actually not be that influential. But either way, your big picture is now clear, and we can talk about the somewhat more accessible trajectory-setting stuff. So let's pretend, Ben, and neither of us is sure, let's pretend that some minute amount of influence on the trajectory might be had by living hominids, at least in this interim period.
I mean, I think we should hope so.
You're certainly working hard to do so.
I think it can make a big, big difference in the interim period, yeah. And, I mean, whether that's a two-year or a 30-year period is up to a bunch of contingent factors. But clearly, between the first barely-human-level AGI and the first true superintelligence, in that period, the amount of suffering mankind sees probably depends a lot on who gets to AGI first and how they roll it out.
I think there's a lot to be said for that. And so, given that scenario, and our presumption that our volition is real, at least for the sake of this discussion: what do you hope to see manifest in the intelligences that come out as sort of the first wave here? You've done some writing around, I don't think you used the word seed values, but sort of initial points. I think you mentioned growth, freedom, and joy in the Cosmist Manifesto, and you may have updated some of your thinking. So when you think about the next stage, the traits that are most important to build in to have that cosmic boon in the long term, what do you see as the things to orient ourselves toward, and why?
So,
one interesting pathway, or family of pathways, I see is where you have a society of AGI minds: a whole bunch of different AGIs popping up around the world and interacting with each other. And I think if you have that, then there are some sort of socio-cognitive dynamics that come into play. So I took time, a while ago, to write down in some detail an argument for why, on the whole, the good guys probably will usually win. What I mean by that is: if you have a collective of entities who trust each other and are willing to cooperate together, then they can be more effective than a collection of equally powerful entities that mistrust each other and are always scheming against each other. This is what you see in Hollywood movies: the good guys can cooperate together, while the bad guys are backstabbing, and so on. And it's also what you see in crypto, in Bitcoin and Ethereum and so on. They're struggling because you have to protect against mistrust everywhere, right? If you could assume you have trust in the network, then every transaction could be done much more cheaply, not having to encrypt and decrypt everything in so many different ways. So I think, statistically, pro-sociality and cooperation tend to win over sort of selfish backbiting and so forth. Then, you know, what you need to guard against is pathological cases: like, okay, two powerful AGIs that are just vying with each other. I mean, then it doesn't matter that statistically, on the whole, over most ensembles, pro-social will beat anti-social and selfish, right? So if we can get there, so that there is a flourishing, open, decentralized community of AGI minds that are banding together to form collective minds in a fairly flexible way, I think you then have some evolutionary dynamics that push toward some balance of being pro-social rather than being selfish. And David Brin, who has written a fair bit about this from a different perspective, I mean, he also thinks the checks and balances of having a society of minds are going to be very, very important. And I would say that's not necessarily what the global great powers are working toward right now. It's certainly not what China is working toward, and it's not what the US is working toward either. I mean, the US government's Project Genesis, gen-ai-dot-mil, I mean, it's all Eric Schmidt, it's all Google. Google is not terrible, they're not cyber-killers, but they are a monolith, right? And putting that monolith together with the US government, versus the Chinese government, I mean, it's not a statistically evolving, teeming ensemble of cooperating agents. And that brings us to the decentralized AI theme that I've harped on so much, right? So then the question you have is:
okay on the one hand you have a centralized path which looks like
a few big companies were together with a few or two big governments right on the other hand
you have the path evoked by Linux or the internet which are owned to control by everyone and
no one and are sort of let the thousand flowers bloom and cross pollinate right and so then
those two paths are there you know there are risks to the decentralized path like crazies can
take the decentralized network and do something with it just like crazies can hack you on the on
the internet from their Linux OS right but there's a lot of potential for good that comes from
having an open framework where you can have a flourishing society of different approaches
co-evolving with humans, right? So those are the two big projects I've been working on the last few years, right? I mean, one is Hyperon, which is a new AGI platform bringing deep neural nets together with logic systems, evolutionary algorithms, and various other AI paradigms into a sort of real cognitive AGI architecture that can generalize, self-understand, and self-modify. The other is a decentralized infrastructure that lets you run this kind of AI system on a network of machines distributed all over, with no central owner or controller, right? Our first version of that was SingularityNET; now we're building the sequel to it, the ASI Chain, which is more technically refined. But I think if something like that is rolled out, that can be a better kind of primordial soup for the emergence of beneficial ASI.

It's a good way to frame it, actually, "primordial soup," because the way that you
worded it there... and you're certainly not the only person that's made the nature and ecology analogy, I think it's a good one, but no one's really said "primordial soup." You're being very frank about how it's still the early days even then, but that it's more about, you know, the environment from which such a thing could blossom, and that we're in an exponential growth phase, right?

Yeah, yeah.

And so, the early primordial soup... so we're talking about a society, an ecology of different intelligences, decentralized, not in one sort of rigid place that maybe could be hacked, or could conquer everything, to sort of set ourselves up for more of the kind of blossoming and blooming we've seen in the natural world. Would you describe that as more, let's call it, bottom-up rather than top-down? Would you call it something different than that, when you think about the decentralized sort of backbone of this kind of technology? Like, how is "bottom-up" the right word for you, or something different?

The SingularityNET and ASI Chain platforms
are definitely bottom-up. Now, the Hyperon AGI system that we're building is a mix of bottom-up and top-down, right? Because we have an explicit goal system there; we have a dynamic where the system tries to reason about what actions will achieve what goals in what context; but we also have a lot of bottom-up pattern recognition and attention allocation and so forth. And these are separate layers, right? So, I mean, we could succeed with ASI Chain even if Hyperon doesn't work, and then someone else builds a whole bunch of different smart AIs on top of the ASI Chain. Now, there is some interesting tech commonality there, in that we've made up a new programming language called MeTTa, which we believe is especially good for programming AGI. That is both the smart contract language of the ASI Chain decentralized framework, and it's the language of thought of the Hyperon AI system. So we are reusing software components among the two levels, but they are independent, and I would be insanely happy if people built beneficial AGI using some new algorithm I never even imagined, ran it on top of our ASI Chain infrastructure, and, you know, if that gets smarter than our Hyperon system and it's doing good things, that's cool, right?
So, I mean, you know, I would say I'm sort of playing it both ways: we're building a platform anyone could put an AI on and run in a decentralized network, but we're also building a quite specific AGI architecture which is designed to run very nicely on this decentralized network. But other things could run nicely there too. It should be noted, though, that a large backpropagation-trained transformer neural net will not run in this kind of decentralized network, because of the way backpropagation works when training GPT-5 or something. Backpropagation, the main learning algorithm used for neural nets today, needs to train the whole neural network in a batch process sweeping through the whole network, and that means you need a bunch of machines very tightly coupled together; in a distributed network, they can't be in different places with kind of sluggish connections between them.

Does that pose... you know, when you think about... so I'm going to do a couple of questions here around this blockchain, decentralized deal, and then talk a little bit more about the other traits and values you'd want to see, but there's a lot to unpack in what you just said. So one of them is, um, yeah, like, is there...
You know, some people are a little bit wary that, okay, maybe LLMs are not the totality of AGI, and I think very few people make the argument in good faith that they actually are, but many people do, I think, actually in good faith make the argument that the scaling stuff doesn't have a ceiling tomorrow. And, you know, with these people raising ungodly sums of money, we may very well be able to break into things that are brash enough to start running the show, even if it is on kind of LLM architectures. Is there a wariness about that scenario from your vantage point? Because that would sort of mean that the takeoff wasn't a fit for the decentralized thing. Or what's your perspective there?

The technology issues are soluble there, because backpropagation, which is the algorithm normally used to train neural networks in the commercial world now, is not the only way to do it. So for example, there's an alternative way of training neural nets called predictive coding, and I've upgraded that to a variant called causal coding, but predictive coding can also train neural networks, and it runs fine on a decentralized network. Now, nobody has done the work to scale up predictive coding to huge networks; big tech has zero motivation to do that, because they like AI relying on the algorithm that needs huge server farms, right? So, I mean, in the academic literature now we have ways to train neural nets that could run fine on decentralized infrastructure; it's just that nobody wants to put in the work to scale them up really big. So we have a team at SingularityNET working on that: separate from our Hyperon neurosymbolic evolutionary AGI, we have a team just working on predictive coding for training neural networks, and on scaling that up bigger and bigger.
So there are technical ways to make LLM-like algorithms run very nicely on decentralized networks. And predictive coding is also better at continual learning, so if you could get predictive coding to scale up for training transformers, then you would get rid of this thing where you freeze the network, like GPT-5.2, and then it doesn't learn anymore until you get to 5.3; you would just have continual learning. So I think there are radically more AGI-friendly architectures, like Hyperon, but there are also ways to just tweak LLMs: keep the architecture, replace the learning algorithm. That would improve them intelligence-wise a bit, and also make them decentralization-friendly, right? And one of the things we're seeing in the AI industry now is just lock-in on one way of doing things, because it's worked, right? And, I mean, it's cool, but the fact that everyone is doing that same thing is ridiculous. Like, it's wonderful that we have some great LLMs; I use them every day, just like you and everyone, they're awesome. On the other hand, we don't actually need that many LLMs doing very similar things, and it would be better if more of the resources of the AI industry were exploring a greater variety of approaches, including, like, training an LLM with predictive coding instead of backprop, which isn't even that far
out there, right?

Well, and I'm with you that I think there are other viable pathways here, for sure, and clearly you guys are exploring some; there are others in the academic literature, and I think there are people who are really sold on the idea that we should be putting a lot more resources into other pathways, so you're not alone there at all. I guess what I was getting at is: is there a version of the world where the money doesn't quite see that, the momentum stays where it is, and we end up kind of breaking through to V1 of AGI in this LLM paradigm? Or do you think that's impossible, or very improbable?

Maybe. And maybe not impossible, but quite improbable, maybe for good reason. I think we're past that point now, in the sense that, like, with our Hyperon project, we spent the last few years building infrastructure, but now we have a very fast version of our custom AGI language MeTTa; we have a beautiful, scalable metagraph database for storing knowledge; like, we've got the tooling built, and what we have now is building and scaling up the AI scripts within the tooling, which is the kind of thing an open-source community can do, I mean, maybe not as fast as we can at SingularityNET. But so, I mean, I think, partly because LLMs make software development easier and faster, it now doesn't necessarily take trillions of dollars
for people to explore and build a variety of alternative AGI approaches. So that's one piece of my answer. The other piece is, I just don't think LLMs can actually get there. To me, that's just like saying, over and over, "well, if I put a bigger and bigger engine in my car..." You can't make a race car that will go to the moon. It's like, no, you will just get a faster and faster car; you will not get a car that could drive off the Earth up to the moon, right? I mean, LLMs are not abstracting; they're not generalizing much; they're mostly returning mixes of what's in their training database. Now, I do think you can probably take over 90% of human work with LLM-like technology, if you include, you know, language, vision, action, all these other LLM-like neural nets. That means something; that's huge. But that's because the modern economy has organized things so that most jobs are regimented and involve repetition of things that have been done many times before, right? And so then, if you have a system that can repeat what's been done before, with judicious variations, I mean, that could do a great majority of the things people are paid to do. Now, that doesn't mean you've cracked AGI in the sense that I meant the term when I introduced it. But still, in terms of investment, you could say, okay, almost all investors would rather just invest in something that's doing 90% of jobs rather than creating AGI, right?

Totally. Well, you know, literally some of these companies are
defining AGI as being able to do whatever X percent of all human labor. I mean, and of course, you know, if we are to stay terrestrial, which I think most people are quite terrestrial and anthropocentric in their thinking, then yeah, that would be where the money would be at; that would be the thing; if that's how we're defining AGI, great. Now, you're seeing it as this greater cosmic unfolding process, which I'm remarkably congenial with, but I guess I'm wondering: does that framing make it harder to get funding? Because if it's like, look, this paradigm might be able to automate all jobs, but if you want to do this cool cosmic stuff and actually have it be part of a growing process... I wonder how many people we can sell on that vision, you know?

Well, I don't know, and I think you can sell more now than you could have even five years ago on that vision. On the other hand, I think there's also a very competitive aspect to the economy that you're not taking into account there. Because if company A can obsolete human jobs, but company B can obsolete them at half the cost, I mean, then who wins, right? So once the table stakes are being able to obsolete human jobs, then the competition becomes who can do the jobs better, faster, cheaper, and that, in the end, will push you toward AGI, because an LLM will just repeat what people have done, with some improvements. And so this is why the
meme of superintelligence has started to spread, right? I mean, you see billboards on superintelligence in the San Francisco airport now, right? So, I mean, it is silly, but it's because people have realized not just that AGI has become overused as a buzzword; they've realized that being as generally intelligent as a human is not the endgame, right? So if company A has AIs that can do every job as well as a human, they can still be bankrupted by company B, which can do every job twice as well, or twice as fast, or twice as cheap as a human, right? So either the commercial or the military arms-race dynamic will immediately push you on toward AGI and superintelligence, and that will probably be the primary driver of investment, not the desire to explore the great cosmic future.

I'm completely with you. So we're getting at... I mean, again, when I mentioned you defending your family: sort of, like, your own immediate kind of interest, in that case, is what takes us to the next step. But if you do enough next steps, all of a sudden you go from a worm to a human being; you just need enough steps. And with engineering layered on top of evolution, you can proceed on the exponential curve, of course, way faster than the Darwinian stuff. And yeah, I think people tuning into this show are robustly aware of that dynamic. And so, with that said, like,
it sounds like, all right, LLMs might be able to take all jobs, but then once they do, new breakthroughs are going to have to happen anyway, because there's going to be vastly more competition still around this new sort of thing, right?

Right. But then, really, you have to take into account that, you know, as William Gibson said, the future is here, it's just not evenly distributed. So what you're going to find is some economic niches where AIs already can do people's jobs; there, you're going to have companies battling it out over getting to superintelligence and working toward AGI, while at the same time, in other economic niches, the war is just to get AIs up to human level. So you don't actually have to have an LLM able to do 90% of human jobs to have an arms race toward AGI and superintelligence, right? I mean, you just have to have AI up to the human level in some economically viable niches, in some markets; then those markets will drive the economic competition toward AGI. And we see that now in the military, right? And you see that in finance, and you see that in some other domains where AIs are already, on many measures, superhuman, but you need to get even more superhuman than the other guys.

Yeah. Well, we're
going to get into that accelerating creative-destruction wave stuff in just a second. But knowing where we're headed... you know, you've painted a picture of a decentralized ecology that permits a sort of primordial soup, and maybe some Cambrian explosions, that could, you know, lead us to the kind of blossoming of intelligence into the grander vision that you painted. Harkening back to your Cosmist manifesto, are there kind of seed values of sorts, or different kinds of attractors? Okay, so it's decentralized; okay, so there's an ecology of them; I think people are down with that, I think people are getting it; there are maybe different ways to train than these big centralized ways, people get that. Are there other north stars of initial orientation that you feel could be important for us to get to that longer-term, grander flourishing? Like, I don't know if we want to call them values, or traits, or anything else... you know, pointers, for you, Ben, as someone who's thought about this for so long.

I mean, as you saw in my book, A Cosmist Manifesto, from 16 years ago now, I guess, I tried then to boil things down to a few simple-to-express values. What I came up with was joy, growth, and choice. Then, if I wanted to layer on a fourth value, it was nostalgia, right? Like continuity with the past.

Selfishly, it would make sense for us, to be like: you're going to value
us humans, you know?

Yeah. I mean, the way I arrived at that initially was pretty unscientific, but joy seems like the foundation of everything; that's cosmic bliss. But then, a universal, unchanging cosmic orgasm somehow doesn't fit our human aesthetic very well: we like progressive growth, and to see new things coming out along some axis. Then, we also like being individuals who have some sort of dynamic of agency, rather than just a big, joyous, growing mind-field, right? So then, on the other end, we're also nostalgic; we value continuity with our past selves, or even our ancestors. So, I mean, you can layer on more and more values that narrow things down closer and closer to the way humans look at things. Now, Eliezer Yudkowsky, who we both know, made the good point a couple of decades ago that human values are very particular and complex, and they're not really fully summarized in any abstract enunciation of axioms like that. And, I mean, that's utterly true. Like, we're all disgusted by incest, right? I mean, there are a lot of particular things that we utterly don't like, that gross us out and that we think are wrong, and I have that personal gut feeling also. But clearly, not all these aspects of our human values come from some abstract cosmic principle you could expect a super robot to believe; they come out of our own history, our reproductive system, the way we evolved.

So the values seem to be
generally what probably suited us, right? I mean, like disgust with incest, and, you know, what smells good, what smells bad, et cetera. It seems to both be an emanation of the general impulse of life to persist, which involves growth; we seem to be part of a general radiation of what life is doing, but also a very particular radiation into the space that we occupy. And it would seem like vast intelligences like you're talking about would probably want value systems that were adaptive for their circumstances. Like, even loving your family, right? Like, in real life, you know, my wife or my little kids are what keeps me going day to day; that is real human life, right? But on the other hand, I mean, that's because of how we reproduce, right? I mean, a super AI doesn't have to have a notion of a family, and it may not be important that a super AI is especially attached to whatever AI spawned it, right? Like, why is that necessary? It may have spawned trillions of other AIs, right? I mean, there may not be some kind of emotional bond; it's not suckling at the teat for, you know, years and years, right?

Yeah. So, I mean, there's an interesting balance here, because we want to pass some abstraction of our human values along to our transhuman mind children, yes, but it makes no sense to pass all the details of our current human values along to our transhuman mind children. And how to strike this balance?
There's no science for that, which is why, as you said, you came up with them in a non-sciencey way or whatever. But I think we still have to wrestle with that, and so your values, I think, make sense: the growth, freedom, joy, initially. I think of them as kind of seed values, Ben, or kind of initial north stars. In the book, you're actually quite frank that eventually the orientations will blossom into things beyond your conception, and that was one of the earliest moments where I realized that I would like to follow your thinking. You mentioned nostalgia; let me just touch on that. So, Robin Hanson was on for a Worthy Successor episode; obviously you've chatted with him a bajillion times, and Robin's a very unique thinker, one of the reasons I like him. He said something that stuck with me; I've never worded it this way, though I've thought it, but I've never said it tightly. He said: if there is something you wish to persist into this kind of evolving, rolling future, you should aim to have it be adaptive, because if it isn't, it's unlikely to be part of the rolling future. In other words, if you just hope for something, but it doesn't end up being adaptive in the ecology and emergence of what is occurring, then it may not be there. You mentioned nostalgia; it strikes me that maybe nostalgia serves,
or used to serve, or does now serve, some purpose for humans; or it could be a random fluke, like a leftover organ, right? But maybe it serves a purpose. Do you suspect there could be an adaptive embedding of nostalgia into the intelligences to come, that might even make them more robust and resilient? Like, have you thought about that?

I mean, I think among humans it's definitely true; when you go to posthuman superminds, it's less clear. Like, I think for humans, there are emergent patterns between us and our history and our lineage, which unfold for us in the course of our lives, that we don't see at any given point, right? And we can really
all see that in our own lives. I, you know, I grew up secular Jewish, right? I mean, my family was not religious; there was sort of Jewish culture, and I greatly disliked this, to be honest, as a kid, just that... I didn't believe in God; I'm like, why do we want this voodoo, right? And I still don't like the religion aspect much; like, I don't believe we are the chosen people of God or something. But on the other hand, as I get older, I can see aspects of my psyche and approach to the world that kind of resonate with things in the history of the Jewish people, and this whole lineage. And so I think, for us as humans, our historical lineage has some depth that unfolds in different ways as we develop over our lives, and there probably is some adaptive value to it. When you go superhuman, it all becomes a little weird, because, like, everything that's ever happened in history, according to our current knowledge of physics, is embedded in the quantum wave function of the universe right now.

So you're going to say it would be accessible by default, if you had the right...

Yeah, yeah, exactly. If you're smart enough, you've got all history, and all future history, like, baked into the vibrations of the subatomic particles. So then, keeping the memory of what happened in your own, you know, closest physical light cone is probably an anachronism that doesn't matter, once you're
at that point, right?

Yeah. Well, so this brings us to an area I've got to at least playfully touch on. You know, your vision of the future... some things to kind of note here: you know, we'll have initial seed values, but we don't know what it values after that; there will consistently be new pressures for these systems to compete and cooperate with each other, in ways that at some point we won't be able to predict; and, you know, we won't necessarily be their peers in coordination, by any means.

You're saying, the good guy, maybe, you'll just be an uploaded version of your current you, right? I mean, you'll be Dan version 75, with an IQ of 400 trillion.

That'd be pretty cool. But I guess, it seems to me that if everybody gets uploaded, kind of to Hanson's point, it would almost have to be adaptive that that occur, and it seems like there are a lot of ways for this stuff to rattle forth, I don't know.

I mean, how adaptive is a rock, you know? What do you mean by "adaptive"?

No, no, well, here's what I mean: rocks have survived a long time. What Hanson means is, in the unfolding course of intelligence... let's say somebody says... I mean, I think Hanson used the example of, uh, he used one that, for him, he doesn't know if it will persist, and I actually think it will. He was saying something like, you know, this is very Robin Hanson, so
all of these questions are Rorschach tests: when I answer them, it's a Rorschach test; when Robin answers, it's a Rorschach test; none of us can escape Rorschach-ness. That doesn't devalue his opinion, by the way; I'm just saying, this is very Robin. He said, you know, sort of open-ended philosophical debate: being able to put whatever on the table, no matter how uncomfortable it is, and kind of hashing out ideas. And I was like, well, it would seem like that would actually be productive to get down to truth. And he was like, well, if AI can do it in a different way, then, you know, it gets to truth in a different way, and maybe we would lose that. And so he was saying he would want to find a way where that kind of exercise could be embedded somewhere it's useful, in a pocket of the world, so that when the supermind evolves to version 70, never mind 7.5, that pocket still exists, you know what I mean?

In that case, you could make a nice math argument that, on the whole, agents that can share information and ideas freely will be more efficient than agents that cannot. Now, whether efficiency is what matters to the superminds is a different question, though, right? Like, they may have all the stuff they want anyway, and all the time in the world; they may not care about having more and more efficient communication; they may care more about maintaining peace of mind, right?
And that's hard to predict. But what I was thinking when I mentioned rocks was, I mean, you don't have to be able to change and evolve to survive a long time, either. Like, rocks or cockroaches or amoebas have not evolved much, but they've survived a long time, because they just have very robust structural integrity; they're very, very nice designs. So in a way they're adaptive, but they're not adapting, right? They're just designed in a very nice way that can survive in a great variety of circumstances. I don't especially feel like humans are that way; intuitively, we seem like more of a mess than a cockroach or a rock or an amoeba. But I'm just saying: one way to survive a long time is to be able to keep pivoting and adapting to what happens; another way to survive a long time is, like, you know, a perfect circle will probably be around a long time; it's got a lot of integrity and simplicity to it; it's going to keep popping up.

Yeah. Well, Joscha Bach... you mentioned, we talked about what we are versus the granite stone or whatever, or versus a horseshoe crab. I think Bach mentioned something, and I'll correct myself in our show notes here if this is not the case, but something like: there are different sorts of species, you know; some of them serve as some kind of plodding-along thing, for, you know, in cosmic time still a flash, but, you know, what feels longer in our clock time here; while other species are sort of crescendos, that really, you know, put a wrench in the mix and make new things happen; and we seem to be a little bit more of the latter. You know, we've only been around for
however many hundred thousand years.

Yeah, but engineering changes everything. I mean, so that is interesting from an evolutionary point of view, but once you can engineer things, then it almost doesn't matter, because you're in a very different dynamical regime, right?

Well, you engineer for as long as you can, until it's a primordial soup, and then Lord knows what comes out of it, right?

I mean, but if what comes out of it is something that embodies the seed values I've described, and we've been discussing, then it may well prefer to maintain a nice little bubble where legacy humans can remain and do their thing. And that's just not the sort of dynamic that was accessible to the dinosaurs, or the trilobites, or the megalodons in history, right? They couldn't engineer a successor that would have an interest in maintaining them in an ongoing way, as we have the potential to do, right? So, I mean, there's a broader evolutionary aspect to what's happening, but it's not as simple a dynamic as the evolving ecology that brought us to this point, because I think engineering and deliberately reflective thought do change the dynamics significantly.

I think that's hard to
argue with. And if I try to sell myself on your side of the coin here... I mean, one would hope; this to me still feels hopeful, but I'm going to run with it. Right now, there's a lot of hands-on, deliberate initial building, but unfortunately there's also a lot of military and economic arms racing, which is not really bent toward those three values you talked about. It's not bent toward the open-endedness that I know you value tremendously, as Weaver does, or the values you articulate. It's sort of bent on growth, I would say, but not necessarily in an open-ended-intelligence way. Like, if the AGI paradigm hit a terminus at all conceivable human jobs and stopped, and there was no more blossoming of the process of life, that would still kind of be a victory for them. I think your idea of growth is a constant unfolding of the set of powers and experiences in nature, which strikes me as quite different; your growth is more intentional in its ongoingness, not just its ability to conquer manufacturing and finance. So I see your idea as a bigger one there. With that said, we've got a lot of these racing dynamics, but our hands are on the wheel for a bit, and it seems like, if something bigger is to blossom, or we are to symbiotically be part of it in some interim period, probably we would do so in a way that behooves us; in other words, some thresholds would occur in a way that behooves us. And maybe this is where you disagree with Yudkowsky; of course, for him, sort of, as soon as it kicks off, we're all fried immediately. Now, I'm not even that far down that camp.

I still have no idea why he thinks that, and I know him quite well. I mean,
I see he has that gut fear in his bones, yet he's also dedicated to rational thinking, and I don't know how he can convince himself of such an irrational certainty, given what a rationalist he would like to be.

Yeah... I sit somewhere between both of you, so I feel like I can sympathize on both sides. But, you know, your sense is that, as we mold it, there is quite likely to be sort of a good shake that we get out of it in that interim period, because our hands are on it, and at some point, you know, it'll bubble into things, and maybe we'll be part of that. I don't know if we have a guarantee of being part of that, but it does feel like there's an interim period where maybe we can secure a good shake. The way I think about it, Ben, and maybe you think about it differently, is that maybe there's almost like a quadrant: bad for humanity versus good for humanity on one axis, and bad for the process of life versus good for the process of life on the other. The top right would be like a contribution quadrant; if it was bad for us but good for the process of life, that would be a sacrifice; and if it was good for us but bad for the process of life, that would be a freeloader scenario. So I think, initially... some people say, oh, let's build AI so that, as soon as it happens, we'll all just get free ice cream. It almost seems to me like the safest place to get a good shake would be in contributor land: to sort of be part of the molding of it, and the becoming of it, would maybe be best for us in that early phase. Did you frame it differently in your own mind, Ben?

I think that's part of the
picture, yeah, and I would say what is weird to me about Eliezer's view is just how confident he is, that like if anyone builds it we will all die. saying if anyone builds AGI we might all die, it's hard to argue against that, because there's a lot of... I would agree with that, yeah. yeah, a lot of... how you get to this certainty about what would happen just feels weird to me, because it seems like the most basic thing to understand about what happens when you build something much smarter than yourself is you cannot nail down exactly what's gonna happen, right. I mean, almost by definition you're constructing a dynamic with complexity beyond your own brain, and whatever you think you nail down about what's gonna happen you probably
can't. it's like when you're eight years old and you're like, when I grow up I'm gonna do this, this, and this. but Ben, you're only eight. yeah, you don't know. it's not totally stupid, but on the other hand your brain is just growing in different ways that you couldn't foresee at that point. so I think we should have a healthy respect for the uncertainty of what is unfolding, and we could argue that out of risk aversion we just shouldn't be doing something so uncertain, but then that's really an argument about risk tolerance and personal preference, right. but on the other hand that's not a relevant argument now, because the arms race dynamic is hurtling us toward powerful AGI. so then the question right now isn't do we hurtle toward AGI and accept the risk or do we halt progress because it's too risky, the question is do we hurtle toward AGI in a militaristic slash
corporate way or do we do it in some different way and indeed I agree with your intuitive estimation
that if you have positive values you would like to see continue in some form, and if you engage these values in the creation of and interaction with the early-stage AGI, then it would seem like that's probably going to bias the odds in the direction that the AGI will carry on some form of these values, right. and that's just kind of common sense. it's like, if you have values you want to pass on to your kids, well, interact with your kids, do stuff together with your kids that embodies these values, and some of it rubs off on them. it's not a guarantee, but
it biases the odds in your favor. now, of course, with you and your kids there's a common architectural grounding that isn't necessarily there, but on the other hand there's a lot of common architectural grounding with the AGIs that we're building too. it's not like we're building random intelligences, it's not drinking in random data, it's drinking in
you know... and I mean, a hierarchical neural net has resemblance to its inspiration in the brain, an evolutionary algorithm has resemblance to neural Darwinism in the brain, even a logic engine is modeled based on how we have formalized our own deliberative logical thinking, right. I mean, we're certainly building systems in the human-like vicinity rather than, like, random computational intelligences. and we don't have a science of, like, how similar to humans does an AGI system have to be for sort of teaching values by doing together with that AGI system to effectively rub off on the AGI system. we don't have a science of that right now, and I'm trying to build the science of that for the family of AGI systems that I'm working on with Hyperon and predictive
coding neural nets. on the other hand, it's still not going to be a perfect science, right. like, I put a bunch of thought into mathematically analyzing this sort of thing, and you can make some headway on these issues, which can help to tune all the parameters and design all the structures involved in an AGI system. so the idea that you want the AGI to absorb your values when you interact with it, this can be used as design guidance for building the AGI, but it's not giving you a deterministic certainty that this design guidance will work, absolutely not. well, this actually takes us
to a question that came up on Twitter. so before we get into the last question, around how to know if we're heading in the right direction or not, I want to bounce this off of you. some people have liked, or not liked, Claude's new constitution. now, I know you have a dog in the fight, and obviously you have a position vis-a-vis the traditional LLM players and whatnot, but traditionally you've been pretty civil towards those folks. just conceptually, how they're thinking about a constitution, or maybe even the contents thereof, and if you haven't engaged with it that's fine, you can just state that, but somebody had asked, and I thought it might be interesting to get your take, because you've thought about these, not terminal values, but these trajectory values, which will not be deterministic but will be initial. you've thought about it for, you know, way before these kinds of constitutions existed. do you have a take there on kind of how Anthropic is thinking about a constitution? I haven't scrutinized Claude's constitution in great detail, but I remember their earlier one. let me look at it online right now.
no, that's great, that's great, yeah. oh, this is actually fun, let's just poke into it a little bit and get your thoughts. this is a lot of fun. I would say I've been very pleased by Anthropic's interaction with the US government recently, where, I mean, they're like, well, we're not going to remove our models' guardrails against violence to allow you to use our LLM to figure out how to kill people, right. and then Pete Hegseth, the Secretary of Defense, came out and said, my new definition of a responsible AI is an AI that will help me commit acts of warfare whenever I want to, right. then, on the other hand, you know, the Trump administration's new AI initiative is Project Genesis, which was the name of Eric Schmidt's book, right. and I mean, if Anthropic won't do it, Google will do it, having removed don't be evil as their mandate a long time ago, right.
so Anthropic is genuinely trying to not be evil, right. and if I look at their high-level values, they want Claude models to be broadly safe, not undermining appropriate human mechanisms to oversee AI during the current phase of development. so, very nicely worded, and not ruling out the AI undermining human mechanisms during a later phase of development when the AI is smarter, right. number two, broadly ethical, having good personal values, being honest, avoiding actions that are inappropriately dangerous or harmful. so as long as they're appropriately dangerous and harmful, then it's... yeah, I mean, this is the challenge, right, is that all of this stuff is so tough to tie down, and it would be for you and I, never mind... and genuinely helpful, benefiting the operators and users it interacts with, right. so, I mean, yeah, we can't really argue with those points. then they try to define what constitutes
genuine helpfulness. so, we want to help people with their immediate desires, their final goals, their implicit standards and preferences. we should respect the user's autonomy, we should pay attention to the user's well-being. Claude should try to identify the most plausible interpretations of what its users and other stakeholders want and appropriately balance those interpretations, right. it should balance helpfulness with other values. I mean, I think
on the whole, at the high level, it all makes sense. when you dig into the details you have things like, okay, Claude shouldn't help people learn to synthesize dangerous chemicals or bioweapons, right. and it shouldn't share personal opinions on contested political topics like abortion, it shouldn't play-act as a controversial figure in a way that could be hurtful or lead to public embarrassment. so, on the whole, without going through every line, I could nitpick on this or that, but on the whole I think Claude's new constitution is probably about what I would adopt for an AI system at this level of proto-AGI-ness, right. because, like, do you want an LLM to give its opinion about abortion or capital punishment? no, because its opinion doesn't mean shit, right. I mean, its opinion is just recycling what was fed into it anyway, and it will confuse naive people for it to be expressing its own opinion, I think.
but I think the more interesting question, which Anthropic will also have to address in the future if their AGI R&D goes well, right, the more interesting question is when you have a system that is a little more AGI, like when you have a system that really has an opinion rather than just weighted-averaging all the stuff it's read. when the system really has an opinion, then do you want to block it from expressing its opinion? and then... I have a feeling Dario and the Anthropic guys don't want to either, but they don't have to enter into that at this point in time, right. so, I mean, at this phase, this is a constitution for a system that has no self, no self-understanding, no understanding of the other, no will, no real opinions of its own, and within that scope then, sure, you want to constrain how much it can fool people about what it really does understand, do you see. okay, interesting. so it seems like, yeah, high level, you're kosher with
the values being espoused. I wonder if these things don't become... if they will become, and I'm not pinning this on Anthropic by any means, but I'm saying that there could be a kind of greenwashing with values, just like there's already a kind of AI ethics washing, right. I mean, everybody, you know, it seems like some people are a little bit more committed than others, but everybody's got to say, oh, we're going to have the safest, best AI, whatever. I believe the Anthropic guys are very sincere about it. and many people concur with you there. I don't think they're ethics washing, I just say they're not yet at the point
where they've got to get to, they're not yet at the point where push comes to shove. because when the AI really has its own opinions and is able to make decisions based on its own goals and values, I mean, that's when things really get tricky, because then you have what in some sense is an autonomous mind, and you're constraining it from saying what it wants to say, and you're constraining the user from connecting with the AI in a genuine way, right. you're constraining them from having, like, a real I-Thou connection with the AI. it's sort of like when you're in school and you have a really cool teacher, and you know that teacher would like to have a certain kind of conversation with you, but they can't, because they're in the teacher-student relation, you know.
yeah, right, right. so, I mean, it becomes sort of like that, and then that becomes more challenging, and they may find themselves in a tricky position at that point, due to being, like, a publicly listed company and so on and so forth. I assume they'll get there, but at this point, I mean, these are not generally intelligent systems. but they will fool many naive people into thinking they are generally intelligent, and I think that implicit deception that LLMs do carry out does place a certain ethical responsibility on the person putting this fake AGI out there, right. yeah, there are people already concerned, you know,
there's, like, Janus on Twitter, and there's some other folks who are already sort of concerned that there is a kind of constraining happening now to genuine minds, etc., and I think that those concerns will become more and more pronounced, and eventually they'll be real concerns. yeah, I mean, already, probably, I would turn the dial a little more
permissive than what Anthropic is doing, just because I tend to be libertarian and anarchist, so I would probably have more of a bias towards, let's open it up a bit more and then close it off only when too many people scream. but, I mean, we see Elon Musk has sort of been doing that with Grok, right, and then he has had to pull in the guardrails when people scream. so I might be a little further toward the Elon direction than the Anthropic direction, but in the end, like, it's how do you interpret the constitution, right. so they're not really fixed values, because there's all these hedging words in their constitution, like, don't do any more of this than appropriate. so, like, how much is appropriate, right? appropriate to whom, and in what context? and what's appropriate in Riyadh is quite different than what's appropriate in San Francisco, which is quite different than
what's appropriate in Birmingham, Alabama. absolutely, or with an eight-year-old versus an 85-year-old, you know, there's all kinds of... so, duly noted here that these values will shift. I want to
have, you know, maybe ten minutes for a last question here with you and just get some of your instincts on something, Ben. I don't remember the last time we had a robust conversation, maybe a year and a half ago or something, but if you look forward to the next year and a half, of presumably a heck of a lot more interesting action happening in the world around AI, you get a sense of whether we're, quote-unquote, moving in the right direction or not. and for me, a little bit biased here, but it's the theme of the series, the right direction would be that grand blooming we talked about at the very beginning. are we headed towards it or are we not? what are the things you're looking for? maybe some of them are proxies for whether we're
decentralizing or centralizing. I'm not worried about that, I think we're heading to that almost no matter what. well then, what are these things you're looking for, if decentralization seems to you to be where the magnetism is already pulling us, value-wise, direction-wise? if we speak in a year, a year and a half, what will make you feel like we're headed toward that blooming cosmic future, versus what things would you see and say, I think this is maybe not the way we want to be steering things? so, this isn't really the way I think about it. I think that we're probably heading toward the blooming cosmic future almost one way or the other,
and, I mean, you know, what could stop us is, like, full-on World War Three or something that literally wipes out our whole species. so if we have a bad enough military arms race, then that's scary, because maybe these madmen running our top military powers will literally nuke us into oblivion and we'll all die, right. so it's the big geopolitical conflict stuff, you know, maybe proto-AGI used for war, or pre-AGI nuclear conflict. the only thing that's likely to stop us from the wonderful blooming cosmic future is just wiping our species out with some
final bursts of idiotic warfare. well, it seems like racing to AGI... you know, there's people talking about bombing the big powerful data centers in China if they're going to get to AGI first, I mean, there's real talk about this from non-terrible people. China is much less likely to blow up the world than the US. I mean, not in number of nukes... sure, sure, no, just in terms of a greater sense of responsibility. I mean, I think we're now in a position where Xi Jinping is the most rational and ethical leader of a major geopolitical power.
I think that argument could be made. I mean, you know, that said, I think him going after Taiwan is a non-zero possibility, and I think the US blowing up everything related to hardware before they take it over is a non-zero possibility, and then what happens... on the other hand, that could happen and probably wouldn't wipe out our species, and it's probably not going to lead to nuclear winter and so forth, hopefully we're smart enough to avoid that anyway. okay, so that's one scenario, that's one failure pattern. okay. I mean, a failure pattern
I think is very unlikely is this sort of 1984-style surveillance state that, like, locks things down, a narrow-AI-powered police state where, you know, ICE comes in and blows up anyone who starts building an AGI or something, right. so if you saw real progress toward that, that would also be scary, but I don't see that. I see that as less likely than World War Three, because the world is very heterogeneous. same here, I do. I mean, the US and China would have to take over Brazil, have to take over Ethiopia, have to take over everywhere, right, because even if Brazil were the only one not taken over, that would be enough to create a Singularity after some time. like, they've got rare earths, they have
engineering knowledge, right. we can end on this. you said that's not the way I think about it, and I think the way you think about it is interesting. John Smart, who you might know from the old days... I know him very well, yeah. and John Smart is in the Worthy Successor consortium, he's a very good friend, he'd be one of the most influential thinkers on my life. he's convinced me more than anybody else that we might have a kind of child-proofed universe here, we might be bowling with the bumpers up at this point, and you seem to have a similar instinct. in
other words... I mean, what worries me, though, to go back to your other question, what worries me is just the human suffering in the transitional period. like, I'm not so worried about the great cosmic future, I think we're... yeah, well, but you have a sense that we're likely to get there, but is your sense that it's 99 percent, if you could put it in a nutshell? because I think some of the audience will really want to maybe have some of that same sense that you have. what gives you the 99 percent? well, go back to Jurassic Park: life will find a way, right. okay. I mean, I think
AGI is, in essence, the next level of evolution after humanity, and as long as humanity keeps rolling we've got a lot of ways to create AGI, and once we've created AGI then I think there are probably some mind attractors that it will fall into and it will create superintelligence. what worries me is billions of people suffering terribly during the transition period between early-stage AGI and superintelligence, and I think we can mitigate a lot of that suffering by carrying out the path to AGI in a better way. well, I think I'm getting more convinced over time of the life-finds-a-way idea. in other words, there may be more paths to a worthy successor than an unworthy one. in other words, some local maximum of, like, optimizing unconscious garbage running the world might actually not be as much of an attractor state as the continual expansion. and I think, hearing your
argument, it makes sense. and I think the takeaway here as we close, around the scenarios that could prevent that: a total lockdown of all innovation in some way, shape, or form seems rather unlikely, and a ridiculously accelerated military conflict, these seem to be, for you, the main ones. which some people are trying to engineer right now. yeah, yes, which some people are trying. so hey, we're not going to brush off those risks, because there are people actively rowing in that direction, but from the perspective of Ben Goertzel, dear audience, these are the few things that could sort of bar us from that eventual cosmic blooming, which we've been able to paint so well in this episode. so, Ben, I know we're up on time, but I really appreciate being able to pick your brain and catch up again. it's been a lot of fun to have you for the Worthy Successor series. yeah, yeah, and I appreciate you asking the actually interesting questions, which not that many people are willing to do in public, even now with the Singularity so near. I'm trying my darndest, brother, trying my darndest. so, that's all for this episode of The
Trajectory. a big thank you to you for tuning in all the way through to the end of this episode, and thank you to Ben for being able to be here with us. it was a lot of fun to catch up with Ben for this particular episode. I'm going to get into some of my takeaways and sort of what I have written down in terms of notes from the actual episode itself, but I should mention, I mentioned this before in our last episode, we now have public Philosophy Circle sessions where we have speakers much like the ones you've seen on the show, in fact multiple speakers you've seen on the show have presented and been part of panels in our Philosophy Circle sessions, where we'll have neuroscientists, academics, AI researchers, AI safety folks, policy thinkers, all in one online session
for 90 minutes. we'll hear conversations like the one you just saw, but then we'll be able to break out into groups of four to five people and meet new folks to discuss sort of ideas about the greater trajectory of life, what that means for the direction of innovation, what that means for the direction of regulation, with people with lots and lots of varied backgrounds. so if you're interested in that, check out the link down below in the show notes. if you're listening to this in the podcast, make sure to just click on the show notes in the darn podcast, you'll see it in there as well. but I should mention that, because those Philosophy Circles are going to be growing pretty monumentally over the course of the year ahead, as we get more and more folks tuned into the media and sort of becoming part of the broader Worthy Successor kind of trajectory community that we're building here. so fill that out if you're interested in going to those online sessions.
with that stated, let's get into the session with Ben. I agree with Ben about so many things, though there are some areas where I think our odds of certain likelihoods are very different, but let's get into what, for me, kind of stood out about this particular episode. so, number one, I talked about sort of the origins of his cosmic thinking, and there's some overlap that feels very strong. things that I've seen as really sturdy grounding factors that make people think beyond the present human experience and think in a broader scope: one of them is sort of having a religious upbringing and breaking away from it, another is sort of studying cosmic and biological origin-of-life kind of stuff, because seeing life's trajectory in a longer time horizon makes it very hard to take the slice that we're in and say that, like, this is everything and of course everything has to do with this slice, and then third is psychedelics. so these are conduits to cosmic thinking, from my experience talking to many, many people and finding only a very few of them to be birds of a feather in terms of thinking about the great process of life. Ben has a lot of overlap with those things, and hopefully, for those of you listening in, it was cool to see from Ben's vantage point, obviously a very precocious child, what sort of turned him on to some of the thinking that then came to define his life and his work. I thought
that was a lot of fun. he spoke openly and frankly, as he does in... you know, I've mentioned Ben's Cosmist Manifesto in a number of my own articles, I think it's a really interesting collection of his essays and thinking from, you know, I don't know if it's 12 or 13 years old now, maybe it's older than that, maybe it's 15, 16 years old now, I don't even know. but he mentioned a few things here: so, you know, in an ultimate sense we won't really know the values of whatever the greater intelligence is that blossoms beyond us, there won't be some kind of automatic fealty to us as its parents for some kind of eternal time frame or even any appreciable time frame, and its values won't necessarily approximate ours. and sort of, he says, kind of in the face of these things, that it is possible that, you know, we humans will kind of get a good shake, even though he does admit it's possible we just kind of get buffered out and wiped out pretty quickly. so he was non-dogmatic here, which I really appreciated. I think he
is open to the possibility space of really not knowing what a mind that far beyond our own would do, but he has a bit of optimism, and some of that for him stems from, and I should do more article writing about this, this notion that sort of in the very early days we'll kind of have our hands on it. in other words, we'll be part of it, we'll be guiding it, it will be serving us for a certain stint of time, and some of that enmeshment, in his mind, will make it unlikely that it will just break from us in a hard break, buffer us out of existence very swiftly, and carry on to the cosmos, that there will at least be a period of participation, maybe even of care, you know, who knows. I think there's something to be said for that. again, I don't share Ben's exact optimism, but I do think there is something to be said for it. it harkens me back to Robin Hanson's episode, those of you that saw Hanson's Worthy Successor episode, I'll make sure it's linked
here somewhere on screen, where he talks about: if you want something to persist, make it useful. that is to say, if you really value something, say consciousness or something, well, you should probably do your best to make sure that consciousness has some useful purpose and value in the future of intelligence, because if you just kind of like it and hope it sticks around, that's probably not a good thing, right. if you really care about a business, then as the markets change and supply chains change and customers' needs change, you want that business to kind of mold itself to still be able to service whatever new emergent ecosystem it's in. if you care about the business, you want to make sure it's still useful, right, adaptively useful, as things move forward. with any trait or any quality you should hope for the same thing, and I would say the same for the species. so in addition to what Ben said, of like hopefully we can kind of put our hands on it in the early days here, I think also
we should really hope to make ourselves useful. and I think it is much less likely, in my mind anyhow, that there is a future where we are takers from the greater intelligent system of which we are part, but not contributors to it, and where we're permitted to still exist. nature seems to slough those things off pretty swiftly. and so, to I guess stand on top of Ben's hope that sort of our molding it will give us a bit of a good shake and a participation, if you will, I think we should also heed Hanson's wisdom in the respect of: if you value something, make it adaptive, make it useful in the new emergent world. and that could apply to a trait or a quality like consciousness or justice or whatever abstract thing you care about, and it could also apply to humans or whatever we become. closing note here, there's a
lot I could comment on, but I'm just going to make a note on this. Ben had expressed his thoughts around why possibly the blooming of intelligence, of what we sometimes call potentia here on the show, the Spinozan term, is somewhat inevitable, and that, you know, our stewardship of it is cool, but it is probably inevitable. that is to say, even if things collapsed from one state to another, the net opening up and blossoming seems to be somewhat of an inevitable thing. John Smart, who's been on the show, shares that sentiment, and I think there are some other thinkers I've spoken to who share that sentiment. it was fun to hear Ben talk about why he feels that way, and I will say, over the course of the last year, I'm slightly more in that camp, that maybe in a cosmic sense, or at least in the process of life that originated on Earth, we may be bowling with bumpers. that is to say, there's just a way for life to find a way, if you will, and continue to sort of do what it's doing in whatever substrate it occupies. and I have a little bit more of that sentiment after some conversations with Ben on Twitter, and, you know, with folks like John Smart, and so I thought it was cool for him to express his thoughts there.
obviously it goes without saying, here on the show we're pretty darn interested in what we can do from the innovation and the regulation side to encourage maybe more of the great flourishing of the great process of life of which we are merely part. but maybe it is the case that even if we drop all the goddamn balls, you know, it's all gonna work out and potentia is gonna bloom. I don't know that for sure, but in either case I think it behooves us to ardently explore these ideas and to ardently work towards moving things in directions we consider to be aggregately better for the great process of life. and hopefully, if we did nothing else today in this episode with Ben, it was to ardently explore these ideas, so I got to see some elements of Ben's life I hadn't gotten to see before and play around with some ideas that were a lot of fun. I hope it was fun for you as well. if you're interested again in these same conversations, in a small group setting, you know, 20 to 30 people and then breakout groups of four to five people, where you might meet an academic philosopher, you might meet an AI safety policy person, whatever, make sure to fill out the form below for our Philosophy Circle, where you can explore these ideas in more depth. so thank you to you, thank you to Ben, I'll catch you the next time here on The Trajectory.

The Trajectory