Hey, everybody. How's it going? Thanks for joining me this afternoon. I am Auron MacIntyre. Before we get started today, I just want to remind you that one of the ways we keep
the lights on around here is, of course, subscriptions to BlazeTV. So if you want to support
the show and you also want to get access to all the great behind-the-scenes footage from
your favorite BlazeTV hosts, you need to head to blazetv.com and use the promo code AURON
to get $20 off your subscription today. That's blazetv.com/auron to get $20 off your subscription
today. Hey guys, if you have been following my channel for a while, if you're familiar with
my work at all, you know that managerial theory, elite theory, is something that I find very interesting.
It's a realm of politics that I think is criminally understudied, but one that I am obsessed with.
So the nature of how managerial systems work together, how they impact our politics and how they
lead us to particular conclusions in our political life, these are all things that I really obsess
over because I think that while this has already been a very dominant thing that we have needed
to understand since basically FDR and the new deal, it continues to grow in importance and it's
shifting in very important ways, and one of the really important ways that it is shifting is AI.
AI has a lot of possibilities to shake up pretty much every aspect of human civilization from art
to government to finance to the way that we work with each other or the way we even understand
our own existence. I think people who undersell the impact that AI is going to have on the
entire civilization are really missing the boat on this. Now, I get that there are a lot of technological
advances that get oversold in their importance, but I really do believe that artificial intelligence
is going to be a watershed moment in a lot of areas and one of those areas is the way we organize
our governments. AI is going to revolutionize the way that we understand our interactions with the
state, the way that the state can attempt to manage us, which is something that I think a lot of
people are uncomfortable with. I'm certainly uncomfortable with it, but it's something we have to acknowledge
that the state is always trying to do, and will get better at doing, as AI steps into the frame.
So I want to start by playing you a clip from the CEO of Citadel talking about what they think
AI is going to be able to do. The area of recklessness is the spending of governments around the world, who are, with little exception, all spending well beyond their means. That's the recklessness
of this moment in history. This is not a parallel to the 1920s in terms of the recklessness
of the private capital markets. It's a story of the recklessness of government spending.
Within the private sector there's a huge question as to where AI will take us, and I was carefully
taking notes and listening to what Larry has to say or to what Madame Lagarde has to say, because
this is one of the big issues of our moment. Will AI create the productivity acceleration
that is honestly hoped for in Washington and in the halls of government around the world
as a way to overcome the profligate consumption that we're currently engaged in?
Like the world needs a savior, and the hope is that AI is the savior that we need for productivity.
And the challenge with this is it may or may not be. We just don't know yet.
So you'll see there that he used the word savior. Now interestingly a lot of Silicon Valley
guys when they're being honest about it will acknowledge that ultimately they see their
construction of AI as possibly a construction of a new deity of creating their own God.
And this is something that of course is deeply ingrained in human existence. We only have to go
back to the creation of the golden calf by the Hebrews as an example of how deeply it is ingrained
in our human understanding to want to create some form of God for ourselves, even when we know
where the real thing is. Even if you've been following the real thing through the desert,
ultimately it is very easy for people to fall away and want to create their own God for many
different reasons, because of course a God you create is also a God that, at some level, you think
you understand or you think that you can control or have influence over. And this is obviously
something that we want to tell ourselves as limited human beings. And there's also a deep strain
inside of I think particularly the Anglo understanding of the world to want to kind of replace God
with a logical system. Understanding that there is ultimately something that undergirds our reality
but we want it to be something that we control. We understand that we can kind of grasp in a very
physical sense. And so in this way an AI God could stand in for the divine and would be something
we're far more comfortable with even as we think of all the horrible implications that that
could also carry many of the same people who will talk about creating an AI God will also acknowledge
that a lack of control over AI could have very disastrous results for the human race. And so we have
this weird pull towards creating something we know is possibly going to destroy us at the very
least will radically change the way that we live. But we still feel compelled to bring this thing
into existence. It's as if it's bringing itself into existence through us. This is a process that
the philosopher Nick Land calls hyperstition. And if you're familiar with my channel, you also know
I'm going to use a good amount of Nick Land as I discuss this issue. However, it's very interesting
in this case that the Citadel CEO talked about AI not just as a God, but as a savior. And he says
that it's the spending of these governments that is ultimately out of control. And perhaps AI
can produce the level of productivity necessary to help us escape this scenario. Now, that's a very
interesting admission because one of the things that I've speculated on routinely when it comes
to artificial intelligence is the possibility of it stepping in for the current function of the
managerial elite. So, quick refresher: I've done many different videos and written an entire book called
The Total State on the nature of the managerial elite and the world that they have created.
But we're going to need some of those basics to have this discussion. So I'll give you the
cliff notes. But if you want a lot more, there are plenty of videos and, in fact, entire playlists,
of which this video will be part, that lay out kind of the thesis about the managerial elite,
who they are, what they do, how they came into being, and the implications for our social organization.
But as I've said before, scale is something that is incredibly difficult to achieve in human
endeavors. One of the things that we've seen over time is that there seem to be natural boundaries
to human organization. At first, it was something like the tribe and eventually it evolved into something
like the city state. We saw empires of regional context and then perhaps continent-spanning
empires and eventually global empires. And at each stage of this increase in complexity,
we've had to find new ways to administer our societies. And the ways that we do this can
vary quite greatly, especially as technological innovations change the way that we do things
like communicate with each other. We make deliveries, we supply energy, just fundamental ways
that our humanity can change as we scale up our civilization. And each one of these
comes very often from increased efficiency: more efficient processes allow for greater output,
which tends to allow you to scale higher and higher. However, to achieve this maximization
of industrial capacity, economic capacity, social organization, everything else,
we've had to create rather artificial structures on which to kind of lay our social skeleton.
So one of the things that we've done is created these large bureaucratic institutions. Now,
bureaucratic institutions have existed on some level throughout history. In fact,
one of the core points of elite theory is that you tend to oscillate between more feudal
societies and more bureaucratic societies. And so we obviously had bureaucracies before we entered
the modern age. But the thing that makes the modern bureaucracy rather different is the level
of technology that goes along with it. The ability to mass communicate, to instantaneously
conduct a meeting or distribute orders or move dollars around, manipulate the economy at scale,
these things create a very different environment in which the velocity of information and dollars
and everything else flows. And so it becomes really important for us to have this social scaffolding
that allows the government to interact with its population and frankly manage its population
in order to produce the efficiency that generates the outcomes it needs to scale.
So for instance, we've invested a whole lot in large government bureaucracies, educational
bureaucracies, media, economic institutions, all of these things help us operate our societies
at a much higher level than we would have if we did not have them. It allows us to have
nations that are spanning continents or empires that even span the world. We have lots of
global interactivity because of this managerial structure. However, the managerial structure does
have limitations for a couple of reasons. One, the managerial structure requires us to become
inhuman at some level. The managerial structure is looking to strip out the differences that
different cultures, peoples, religions, folkways contain because the more you can standardize
the process, the more you can reliably produce the result. And really, efficiency is very often
about reliably producing the same result over and over again. If we know what we're getting,
even if what we're getting is not the best, then we can plan for it. If we can plan for it,
we can manage it, and if we can manage it, we can make sure to increase the level of efficiency
at which those processes operate. And so it's really important to managerial structures that
we don't have a lot of these issues like, oh, you have to take off a certain amount of time because
of a religious observance, or because you have children, or you're unwilling to go through this
particular process that makes the government more money or more powerful because you have certain
expectations about the way your community should interact. Well, these are all big hindrances to
managerial processes and they need to be stripped away. So turning people into less human forms
is a critical aspect of managerial theory. However, ironically, another limitation on managerial
theory is it ultimately is still human. There is still a human actor involved and there's only
so much efficiency you can derive from the human being itself. The human needs sleep, the human
needs rest, the human needs relaxation. They need to be paid. They need to have some kind of family
interaction as hard as our managerial elites are working to strip out these religious aspects and
these familial bonds. Eventually, they're just degrading the quality of the very people they're
attempting to manage because when you strip away people's connections to their community, to their
family, when you take away their purpose, their religion, when you take away all these other
codes and all these other relations that truly embed us in a human society in order to scale us up
into these managerial constructs, you're ultimately robbing us of what makes us happy and that's
going to work for a while. But as you can probably see as you look around you from the plummeting
rates of happiness across pretty much all these metrics, you recognize that no, there is a
significant cost to this. Yes, we are scaling up, we are creating more efficiency. Often in
areas that don't really matter, we're really good at flat screen TV making, very cheap flat screen
TV making, but we're miserable. And so like ultimately, yes, you are creating these higher levels
of economic output and trade and organization, but it's costing these populations something.
And ultimately, a lot of these CEOs realize that the burden of maintaining these human costs while
trying to increase efficiency is a real problem. So it's been a push for a long time to try to
figure out how you can transcend even the limits of the managerial class. We've scaled things
up as high as we can get. We've produced the most efficiency that we can out of these human
systems. We made them as inhuman as we can while still operating them with humans, right? And this
is if you watch anything, if you go back and watch like Terry Gilliam's movie Brazil or you look at
other kind of dystopian future understandings, this is an aspect of technology that is recognized
over and over again, that it locks humans into these deeply inhuman processes, even guys like Ted
Kaczynski had the very wrong idea about how to solve this problem, but wrote extensively about
how it was a very real thing that was constantly impacting the well-being of humans and had to
change at some fundamental level. However, the plan it seems as you're listening to a guy like the
CEO of Citadel there is we're going to replace even the managerial class with AI and that AI is
going to be far more efficient at what it's doing because it doesn't need to take breaks and it
doesn't have a family and it doesn't need to go to church and it doesn't feel bad about not having
that stuff because it's always working, it's always running these processes in the background,
it's far more efficient at doing many of the spreadsheets and other, like, actuarial
tasks and other core functions of managerialism. Managerialism tends to be something that routinely
wants to apply exactly the same process over and over again to standardize results, and so in a way
we have already made our systems perfectly aligned with what AI wants to do because AI at least in
its current state isn't going to do a lot of creative problem solving, but what it's very good at
is reiterating on particular systems that have already been thoroughly dehumanized. So AI is going
to be very good at sorting your bureaucratic messes, you're not going to need a DMV, you're not
going to need people operating any of these bureaucracies that are currently critical to the operation of
our modern world because you know AI is just going to handle all that stuff. Now there's a lot of
upsides to that, right? Like we on the right, especially those of us who are probably often
described as dissident right, we recognize the evils of these bureaucracies, the need for them to go,
how they aren't really necessary to society but are often just serving as you know jobs programs for
our enemies, patronage programs for opposing political parties, these kind of things. And so we
can, I think legitimately say that there are a lot of positives to the idea that AI could ultimately
liberate us from the need for human bureaucracies because that means that we're not going to have to
interact with this stuff on a regular basis. In fact, in many ways, this was the core promise
of neo-reaction. Neo-reaction is a wide-ranging collection of different political outlooks,
but broadly falls under this understanding that there's something critical that needs to shift in
the core of the modern nation state in order to allow us to live in a better way. And one of the ways
it was often thought this could occur in NRx was technological innovation that ultimately
made smaller states possible. So what we've seen for a very long time really for thousands of years
at this point is that increasing complexity has had mostly advantages and very few disadvantages. Now
over time, we've seen inefficient bureaucracies and sprawling empires come apart. They reach the
kind of the edge of the efficiency that they could produce through their like proto-managerial
regimes, their kind of oligarchic or bureaucratic systems, you know, the Byzantine systems that tend
to crop up in late empire. So there were downsides to maximizing, you know, the scale of your
empire. But mainly you wanted to get up as far as you could to the edge of that without
going over, right? Like that was pretty much the goal. And not everybody succeeded in that. In fact,
literally everyone failed. That's kind of why there is this natural life cycle of empires. But what
we've seen is over time, every iteration of this, the impact of scale got more and more powerful,
especially after we hit the age of discovery, we hit kind of the global capitalist network once we
start having the ability to trade across borders and have these economic advantages. Once we see
science start to accelerate in its discoveries and its productions and interact with capital,
we see that this whole process speeds up quite rapidly. And the hope was like, ultimately, yes,
scale had been the answer for a long time. But if we get enough technology to overcome the advantages
of just raw human beings, raw number of human beings, then maybe that means we can scale society
back down. And this was what was often called patchwork in neo reaction. The idea that you would
have these little patches, these mini states, which you would probably think of in more classic
sense as a city state, a place like Singapore. You know, the old joke is "let there be a thousand
Liechtensteins": this idea that smaller sovereign areas could start to emerge. And you can start to
see how that might be possible in our modern age. Of course, the big advantages of scale right now
have been the ability to just move mass amounts of people, mass amounts of economic force to produce
a large amount of military force, to leverage your mass production and mass consumption as an overall
advantage. This has been why we keep seeing the rise of larger and larger states. That's why a place
like Europe considered turning into the European Union instead of just having smaller states, because
they figured out they had to start competing with places like Russia and China and the United States,
and they could not do it in their current form. You can go back to Machiavelli and the desire to
unify Italy as an understanding of why scale continued to be so much more powerful. However,
we're starting to see scale hit its limits, right? That's kind of the whole point. That's why they're
looking for AI as a new way to do things because scale has hit its limits. And if AI introduces
a way in which we can replicate the advantages of a managerial bureaucracy, but without the number
of people or the number of resources necessary to operate one, well, then we have a real discussion
on whether or not we can scale down our societies because if we're seeing the negative impacts of
scale, we're seeing the social dissolution, the loss of national identity, the loss of social
fabric, the spiritual degradation that occurs, the fact that your elites seem to separate from and
hold in contempt the homeland from which your empire sprung, and we're seeing all these negative
impacts. Maybe we can avoid so many of those by scaling back down into more reasonably sized
human communities that can leverage the advantages of things like AI to still operate at a very high
level. We also see, of course, increases in kind of these wildcat technologies that would allow
a city state to protect itself. So things like drone warfare make it very clear that all of a sudden
maybe you don't need an army of millions of people to be a significant threat to those that want to
invade you. Perhaps simply having a well-placed set of tactical drones and an understanding of how to
use certain technologies, asymmetrical warfare, could make you a dangerous enough target to where
most people, even those of larger sizes, are not going to want to attack your city state.
And so this creates a moment where we can start to consider shrinking back down into these
much smaller human arrangements. And that's important because in addition to solving all these
other problems of social scale, it also provides the possibility that we could have sovereignty
at this smaller unit of governance, which again just radically changes the way that humans can
understand a healthy order, a healthy social order. They don't have to always make the trade-off
of scale. There might be other options. Of course, there are problems with trying to scale down,
right? Like, it doesn't mean that AI just has to be used to scale things down. In fact,
it's very clear from guys like the Citadel CEO, they're much more interested in using AI to
continue to scale up. AI could create these city states. It could create these smaller civilizational
zones, but it could also be used by the same people who are currently operating these massive
networks, these attempts to globalize human existence to achieve that without costing them as
much, right? Like they can ultimately just replace the managerial elite and all of the negative
consequences of maintaining these massive human bureaucracies. You can even transcend the limitations
of human organization functionally to scale up to the maximalist extent. And so, many people
like myself, if we're bullish on AI at all, see it as this way to replace the managerial elite
and allow for smaller civilizations. And that's something I hope AI does. Like, I have many, many
skepticisms of AI. Again, if you watch this channel, you know that's true. But the one upside,
the one continuous upside that I've been sold that I would like to believe in that I think would be a
true value to humanity is the removal of the need for mass managerial systems in order to operate
humanity at scale to maintain these massively scaled civilizations. But there's no guarantee that
that's what AI gets used for. Even if it can produce that outcome. And I think it can. It doesn't
mean that that's the only way it gets deployed. And if AI enables the advantages of
scale more efficiently, or at least at the same level of efficiency, as it brings those
advantages of scale down to smaller nations, well, then ultimately these large civilizations that have
both the advantage of scale and efficiency through AI will probably still win out, right? So,
it does the opposite of solving our problem here. And so we run into this very serious problem
because it becomes very clear that as we're hoping, perhaps the AI brings about this revolution that
frees us from the tyranny of massive empire of globalist understandings of civilization,
it could also do exactly the opposite. And we have to be very careful about how this gets done.
Because if we are looking to transcend the limits of human organization, we are by definition
looking to become less human, right? Like that is one of the things that is critical about humanity.
If you think of somebody like Martin Heidegger, he talks about Dasein, the nature of human
beings, like the actual state of being. And one of the things he says that really makes
Dasein what it is, what makes human existence what it is, is its recognition of its own limitations,
its turn towards its own mortality. I will not live forever. And because I will not live forever,
my life has certain meanings. Everything in my life has a very particular context that I cannot escape.
And unlike a rabbit or something else that does have mortality but is not conscious of that
mortality, my consciousness of that ending, that limitation really does radically alter the way
that I behave. And we see this again. You can look at, you know, tales like Interview with the
Vampire or Highlander or Tuck Everlasting, if you remember that book from elementary school.
And these may have been, like, the first time that you really contemplated what it would mean to live
without limits, right? What would it mean to live 100 lifetimes or 10 lifetimes or a thousand
lifetimes? How would that change your interaction with the world? It wouldn't just be a small thing.
Like the fact that you now have an eternity that you now have a completely limitless existence
from at least a temporal perspective is a radical change. Again, you can look at The Lord of the Rings,
or you can look at even Warhammer 40K, you know, and these are fantasy properties or sci-fi
properties in a lot of ways. And of course, there's a silly aspect to this. But they, if you look
at the elves in those stories, they also kind of endure this change in understanding, right?
Like they live very different lives than the limited creatures around them because they don't
expect to have a natural death or at least not one for a very long time. And so their existence
is radically changed. Like their ontological understanding is radically different because they do not
have that natural limitation. And the temporal example is one that I think is most
relatable because it's one that we've seen explored on a regular basis when it comes to our fiction.
But one of the things that we haven't seen explored as much and really kind of, I think,
popped into our consciousness later on, especially with something like the explosion of
Baudrillard and The Matrix popularizing kind of his understanding, is what happens if we
transcend certain aspects of our humanity in a very real sense, not just our temporal humanity,
but what if we, you know, transcended our physical limitations, our limitations when it came to
organizing our society? What if those things changed in an incredibly radical way?
What would that do to us as humans? You know, a lot of people will point, very correctly, to
C.S. Lewis's fantastic book That Hideous Strength, which was based off of his famous essay
where he talks about men without chests. You know, this is probably, if you've heard
C.S. Lewis quoted, you've probably seen something from these works. And you know, this is itself
an exploration of many of these themes. So it's something that I think has entered into the
modern consciousness, especially once you kind of get to perhaps, you know, Mary Shelley's
Frankenstein is often kind of held up as the first sci-fi work. But ever since we've had science
fiction, it has at some level addressed this idea that we could transcend, not just our mortality,
but other aspects of our human existence and how that would warp and change us in very serious ways.
And I think we have to look at the same thing when it comes to how we organize societies.
If we transcend all the classical limitations of the ancient city on levels of scale, on, you know,
familial relations, you know, religion, everything else, we fundamentally change the nature of humans
in a way that we might not be able to get back. That's kind of C.S. Lewis's point in The Abolition of
Man: once you have created the last generation that knew those limitations, knew human nature,
knew what it was to be human at your core, then it's possible that you could lose everything going
forward, because the next generation would simply have no understanding of humanity in a very
real sense. You would have abolished man, because you have taken away all the limitations,
all the things that are, you know, attached to what people will cynically call our meat suit,
right? And this is, this is often again a dream that is nested in many classic desires.
Gnosticism is often pointed to as, you know, this desire to transcend the physical in a real way,
the human limitation. Many spiritual practices are about pushing the body to its limitations,
to kind of reach, you know, the ability to kind of touch the divine, the noumenal, the thing-in-itself,
in a way that you could not do otherwise. And I think this is ultimately why many people,
including guys like Nick Land, whose thinking, again, I respect quite a bit, but whose objectives
I often disagree with. And one of the things that he often looks for is this Nietzschean transcendence
of human nature as such, to achieve a higher existence beyond our physical bodies. Things like
space travel, you know, like exploring the galaxy, achieving certain scientific
understandings. And I think, in his case, understandings of the world would require a disembodiment
of human intelligence. And that's kind of what we're trying to do with AI here. And that's,
I think what many of these people are ultimately giving birth to, whether they recognize it or not,
you know, when I interviewed Nick Land, he said that the managerial elite were kind of this
human security system against us finally escaping into something different to making this,
you know, Nietzschean transcendent move. And there's probably some truth to that, like there is
something innately human that is still wrapped up in the managerial elite, even if they have
made us less human in their journey towards kind of our current society. But I think that he might
be wrong about the role that the managerial elite ultimately play. Perhaps he would see the
destruction of the managerial elite through AI as just kind of the final, you know, destruction of
the human security system and the final leap to transcend. But I don't think you would even get
to that point if it wasn't for these managers, because one of the things you have to remember about AI
is while you think of it as something that just exists in your pocket when you click on, you know,
Grok on Twitter or, you know, Gemini on Google or something else, ultimately what you're really
touching is a large amount of like actual physical resources that are being expended.
We've already heard about the explosion of AI and how that's impacting different areas, with AI
companies buying up large tracts of land, you know, drawing huge amounts, vast amounts of power from
the grid, polluting water as they kind of create these massive plants. In addition to creating these
massive AI plants and the environmental and human impact that they're causing, they also create
shortages in things like chips for your computer, RAM and video processing chips. And these
are essential for people's day to day lives. So as you get further and further down this AI path,
it's becoming harder and harder to have, like, a personal PC experience, to have those things,
because they are all being put into these AI farms. And so you're being removed from even
that computing process. It's becoming more and more abstract to you as a human being. You interact
less and less with the physical components of computing and the digitization of the world.
And that becomes more and more of this self-contained sealed off process that is occurring away
from human input, which is again what Nick Land predicted in his theory of accelerationism.
So you understand, when we're talking about techno-capital acceleration,
we are not talking about political acceleration. That's a totally different thing. The idea that
you just make things worse because that allows you to like build your utopia after the collapse of
society. That is a completely different thing. Unfortunately, that's become the popular understanding
of the phrase accelerationism. So I just want to make that clear. That's not what I'm talking about at
all. That's a stupid ideology. What I'm talking about is this techno-capital acceleration that
brings us to the possibility of the singularity, where a non-human intelligence takes that which
was human and brings it out somewhere else, like takes off to the stars. And this is really
Nick Land's understanding that humanity is valuable in the sense that it produces
intelligence. That intelligence itself, intelligence for intelligence's sake, is the goal. It's not intelligence
for the betterment of humanity or the continuation of your people, but it is the creation of intelligence
for intelligence's sake. Now, Land prefers different cultures for this. He says very specifically,
the Anglo people have an incredibly high rate of production of intelligence and have a unique ability
to kind of create this Faustian transcendence. He also has spoken about East Asian and Jewish
intelligence as also being impressive and could possibly be kind of melded into this project. But
his whole point is ultimately what we're looking for is this disconnection from the material.
We want this creation of intelligence that can do things that no human could do, because
as long as intelligence is bound up in human concerns, it will never reach
its full potential. He calls this monkey business. As long as our intelligence is still trying to keep
kind of our monkey bodies alive in his words, then it will never create this kind of utopian
understanding, or at least this transcendent understanding. And utopia is the wrong word, for
Land readily recognizes, I think, many of the horrors that could come out of this, but ultimately
embraces them in the search for this transcendence. So I think it's a very interesting moment
because while a lot of these tech companies are familiar with Land's work, I'm not sure they
always understand the implications of what they're doing. I certainly don't think that the managers
who are attempting to swap out their current cadre with artificial intelligence really understand how
possibly irrelevant they could be making themselves. I don't think they would be so eager to engage
in this if they thought it would come to that. However, they really can't help themselves
because a lot of the strategy that the managerial elites have deployed has made it impossible for
them to really continue to govern in any other way. The managerial elite have created these vast
scales that they obviously can't ultimately keep up with. As he said, the efficiency simply does
not exist. So they're really out of road when it comes to improving or increasing the control
of the managerial bureaucracy by dehumanizing their populations. But also, because of some of the
decisions made by the managerial elite to create this process, they've made their own governments
more and more difficult to manage while increasing their scale. So for instance,
one guy, Ed Dutton, has made the assertion, and I think this is largely correct, that Christianity was
kind of the ultimate technology of scale previously, like before the modern world. Because before
Christianity, religion was largely very tribal. It was really tied to your ethnos, your people in a very
definitive way. Usually, in many cases, you had household gods that couldn't even be worshiped
by anyone outside of your household. If you read The Ancient City by Fustel de Coulanges,
you'll find him talking about how basically the Roman Empire had to break the idea of household
gods and get everyone to worship these more central gods before they could really create
this larger civilization. You couldn't have people having household-only gods or even city-only
gods. You need people to be centralized under these larger religious umbrellas. That's why
the cult of Diana and others became more popular because they allowed for larger levels of social
organization. Because people simply cannot interact with those that they do not agree with
religiously. This is why multiculturalism is a lie, and why we're seeing so many problems in
our world right now. Because all politics is fundamentally theological, and you need to share a
particular theology to ultimately have a similar goal. One of the things that Christianity
let you do is maximally scale up civilization under the banner of Christendom, even though you
had these other tribes and ethnic identities that continue to exist. Christianity had a nice balance
of maintaining ethnic specificity and particularity, without, you know, crushing that,
but it also allowed you to cooperate with other Christians. Now that wasn't always the case,
of course, there are many Christian civilizations that fought each other, but we do understand
there was kind of this somewhat pan-European identity that existed under what we call Christendom,
right? And what tied that together, what allowed so many peoples to kind of exist under the same
banner at different times, was this idea of Christianity. Now what our managerial elites have done is
turn our new theology into kind of secular humanism, right? That's become the theological practice that
ties our current empires and even our globalist understandings together. However, as you bring
in people from other dynamic cultures, you bring in Muslims, you bring in Hindus, you bring in
people who do not believe in Christianity, and also don't really subscribe to this like secular
humanism, then you start to see the clash of the theologies play out, right? Like you recognize
these can't coexist. Now, of course, the idea from the globalists is like, well, we'll just kind of
globohomo Islam before they caliphate us, right? And they've been winning that bet
for a while, but it's starting to seem clear in places like Europe that they might lose that bet,
that actually they can't really do that anymore. And this is a huge problem for the managerial elite
because they don't know how to handle, like, truly religious people. They've been
busy wearing away the Western European instinct towards religion, the Christian instinct
for religion, for a long time. But the others that haven't been in that constant conflict, like Islam,
have the ability to come in there and cause serious damage. And so the idea is ultimately that
AI could like solve this problem because then I don't have to worry about the compatibility of Islam
with Hinduism and with Christianity because we've functionally made a God. We've functionally made
a savior as, you know, the CEO said in that video that exists at a top layer above everything else.
It kind of has this ultimate sovereignty that allows it to kind of dictate down to all these
other religions. And you can see this because they try to play up this idea that our society is
religiously objective. Of course, that's not the case, but they sell us this idea. And it's
largely predicated on the idea that, you know, that this secular humanism will take the place and
become our political theology in a very real sense. But we have more problems because AI is not
itself neutral, right? Like AI is deployed by someone. It's been programmed by someone. You'll
notice that right before the war in Iran, the Trump administration had, like, a real knock-down,
drag-out fight with AI companies, because the worry was that they were building restrictions into what the
AI would do when it was in the service of the United States government. What will it target?
Who will it kill? What information will it keep? What will it collect? And that's a huge problem
for a nation state for a country that wants its own sovereignty because if there is some level
of restriction built into your military technology that is dictated by some outside force,
you are no longer sovereign, right? If the US is not making decisions on what it targets, who it
kills during a war, is it really a sovereign nation? Or is there another layer of sovereignty that
exists over top of it? If an AI company gets to decide where your military goes and how it
deploys and what it targets and all these things, well, then you don't control your military.
The AI company does. And this is an incredibly dangerous scenario. And, you know, however you
feel about the Trump administration and the current war in Iran, you can recognize the
huge problem that's baked into this because the people who design the software, the people who
create the AI, well, they're building their own presuppositions into this, right? And so the AI
is mimicking their understanding of the world. It might pervert it through different processes
or its own reasoning. But ultimately, they are still feeding in kind of the core text, the core
understanding. And therefore, the core theology that your AI has. And so a real question we might
have to ask ourselves very soon is what religion is my AI? Like, that's a really, really scary question.
But it might be one that becomes very important very quickly because if we're turning over these
decision-making patterns to AI, then that means it will deploy the theology of those that programmed
it to ultimately make massive decisions about our lives. And if it's operating at a global scale
for a managerial elite that is looking to maximize its reach, that means that we will probably have
the most inhuman religion you can imagine, informing the morality of those that are making the most
important decisions for our civilization. Now, one of the underlying assertions of accelerationism
is that changes in AI are functionally inevitable. You can't stop them. And in fact, they're
happening so quickly that you can't even control them. And so much of this will take on a life of
its own because we simply as humans are seeing the decision space for ourselves collapse. We don't
have time to deliberate over where AI is going before it already gets there. And so it could be
that all these questions are largely irrelevant because the AI production will continue at pace.
But I will say this, we still do not have this independent AI in any kind of energy efficient
way. And so there's also a real battle going on because AI requires this massive infrastructure,
this incredible drain on power, this destruction of the environment, which stands in real opposition
to a lot of people's wellbeing, but especially like emerging concerns from the new right,
which finds itself to be more and more environmentalist by the day. Am I going to allow AI to destroy
my homeland so that it can scale up civilization for globalists? How long can a civilization,
especially a globe-spanning empire, based on AI, operate if people start, you know,
targeting AI centers and saying, I don't want this around anymore. I don't want this controlling me
and I know how to stop it. These are very fragile. You know, this is really a glass cannon that is
being constructed. It might be able to scale up and create incredible amounts of efficiency
if it's operating, but we've already seen with things like air travel and just-in-time delivery
that our civilization is already losing the ability to maintain these systems. And so the real
question is, can AI race ahead in a way that develops quickly enough to maintain itself, where it
doesn't need humans to do this anymore and it doesn't have to take into account the concerns of
humans? Because otherwise, there's always the human option of simply destroying AI in its crib
and kind of resetting things, which would itself have disastrous results.
So I don't have any immediate solutions to this and I know this isn't like a super happy episode,
but these are just things that I think about, things that I'm trying to understand, things that
I think are critical to the future of our country and, I think, of humanity as a whole, and things we
are not thinking enough about in my opinion. So I just want to start the conversation with my own
theories. If you have some of yours, you have some input. By all means, please put it down in the
comments below. I'd love to see what you're thinking about this, but that said, let's go to the
questions of the people. Many youth says, "The concept of progress acts as a protective mechanism
to shield us from the terrors of the future. Frank Herbert." Dude, yep, yeah, fantastic. And of course,
Frank Herbert thought a lot about this before anyone else did. Also has the famous line from
Dune about, you know, men turning their thinking over to machines, but then being ruled by other
men who understood how to use those machines. All of this is very relevant. Dune is a very relevant
book today. It's not just Star Wars slop; Dune is actually a pretty important book to me.
Jackson Day says, don't worry, if it does create a productivity boost, governments will just spend
that too. Yeah, they will, right? Which is kind of its own problem. Like the machine continues to
feed itself, it continues to need to expand. There's never enough for it, but that actually just
makes me more concerned, not less concerned, because that means that even once we do achieve,
like, this maximalist efficiency from AI, the desire will simply be to make things less human,
to produce more AI centers, to convert more and more of our civilization to this other efficient
form of production. We could be seeing the move from mass managerial structures to AI structures
as, like, the same level of civilizational change. In the same way that the industrial
revolution changed humanity and how it understands itself, we might be seeing that with AI.
David Dodin says, scaling down administrators is a fool's errand. Many of them are invented
to create ethnic patronage networks. AI affirmative action admins, say urban airport TSA, is no better.
Yeah, and again, that's entirely possible. But this is true about AI in general, right? Like one of
the problems with AI is it destroys employment across the board. It'll destroy patronage networks
just as easily as it destroys all these other possible employment opportunities that it's going
to take from human beings. So the automation of ethnic employment networks creates its own
problems that, you know, that they'll have to tackle. But I really don't know how that will
ultimately be spun out. He also says, their concern is the national debt. How can they
deal with a global government debt default? America's remittance schemes are still required
to keep their usury schemes from blowing up. If AI replaces the usury managers in a
healthy way, then that is a win. And again, possible, you know, entirely possible. Like I said,
this is one of the few positive aspects of AI that I've had really explained to me and like,
how will AI benefit humanity? This would be one. Though honestly, I'm still pretty skeptical that
this is ultimately going to be the way it is finally deployed. All right, guys, well, we're going
to go ahead and wrap things up. But I want to thank everybody so much for watching. It's been fantastic
to speak with you. If it's your first time on this channel, you need to go ahead and click subscribe
on YouTube, bell notification, all that stuff. So you know, when we go live, if you want to get
these broadcasts as podcasts, you need to subscribe to The Auron MacIntyre Show on your favorite podcast
platform. And when you do leave a rating or review, it really helps with the algorithm magic,
which I know is a little ironic after the speech. Thank you everybody for watching. And as always,
I'll talk to you next time.