
In this installment, we dive headlong into the transformative power of Marshall McLuhan's four laws of media, known as the tetrad. We apply them rigorously to the evolution of cognitive prosthetics, from humble calculators to omnipotent computers and now to generative AI. This exploration reveals how the current wave of obsolescence echoes profound historical technological shifts. Yet it also brings on a dramatic reversal that will redefine human purpose, creativity, and existence itself. This reversal is no mere downfall. It stands as the climactic transformation in the Hero's Journey, where the hero, having braved the abyss, returns not just changed but empowered to reshape the world.
Read more at: ReadMultiplex.com
Welcome back to the deep dive. Today, we are, um, we're not just looking at a technology.
No, not at all. Not in the way people usually think about it. We're not looking at the latest
gadget or a piece of software code or even the stock market, which I know is where everyone's
head is at right now. Right. And we're deliberately looking away from that. This is something much
bigger. We are looking at a map. And it's a map for a territory that, well, quite frankly,
it doesn't even exist yet. Or maybe a better way to put it is that it's forming right under our
feet. Exactly. Like lava cooling into rock. It's happening in real time. It is a map, but I'd
almost call it a survival guide. We're talking about something very specific today, a concept called
the interregnum. The interregnum. Okay. I have to admit that word has been stuck in my head since I
read the brief. It sounds ominous. It should. It sounds like something out of a history book about,
you know, the fall of Rome or something. In a way, it is. Historically, an interregnum is that
chaotic, messy period between the reign of one sovereign and the next. The old king is dead.
The new one isn't crowned yet. So all the rules are suspended. Exactly. Law and order can break down
uncertainty rules. But in our context, we aren't talking about kings or queens. We're talking about
the, um, the dominant operating system of human civilization itself. And the sources for this,
they give a very specific timeline, don't they? This isn't some vague sometime in the future.
It's remarkably specific. We're looking at a period that spans from late 2025, which we've just
passed all the way to mid 2039. So we're in it right now. We are walking across a bridge.
And it's a chaotic, swaying rope bridge that spans this massive canyon between the era of
labor scarcity, which is everything we've ever known and an era of, well, boundless plenitude.
Boundless plenitude. That sounds utopian. It sounds like Star Trek. But I have a feeling the journey
there is going to be, let's just say, bumpy. Bumpy feels like a massive understatement. Oh, bumpy doesn't
even begin to scratch the surface. This is a kind of shift that demands courage. It demands vision
and maybe most importantly, it demands a guide. You can't navigate this kind of terrain without
someone who's already seen the pattern. You'd be walking blind. And that brings us to the
architect of today's deep dive. We're exploring the latest and I would argue the most profound
installment yet in the 5,000 days series by Brian Roemmele over at ReadMultiplex.com. And I have to say
usually when we talk about tech analysts, you think of, you know, guys in fleece vests looking at
stock charts. Right. Or reading Python code commits. Exactly. But Roemmele feels different. The way
he writes, it's not really tech analysis, is it? To call Brian Roemmele a tech analyst is like calling
a cartographer a guy who draws lines. It just misses the entire point. He's operating more like a
modern philosopher. I like that. He sees these deep patterns where everyone else just sees noise.
He isn't just looking at what the AI models can do today. You know, can they write a poem,
can they code a website? He's looking at what they do to us. To our souls? To our souls, to the human
nervous system, that's the level he's operating on. And that is the hook for today. We're going to
frame this whole technological shift, this 5,000 day period we're living through, not as a tech trend,
but as a hero's journey. Joseph Campbell's monomyth, the oldest story we have. Right. So we have the
hero's journey. We have this 5,000 day timeline. But we also have kind of a secret weapon in this
analysis. A figure from the past, who Roemmele argues is the absolute key to understanding our future.
The Oracle. The Oracle, a tech prophet from the 1960s, Marshall McLuhan. The medium is the message
guy. That's him. And I remember reading him in college and just thinking, I have absolutely
no idea what this man is saying. It felt like reading poetry from another planet. Man, you are not
alone. But Roemmele's whole argument is that McLuhan's work, which was so often dismissed or misunderstood
in his own time, provides the perfect cipher. It's the code breaking tool for what generative AI is
doing to us right now. So he's weaving these two threads together. The ancient structure of the
hero's journey and the electric, almost mystical prophecy of McLuhan. Okay, so our mission today is
threefold. First, we need to understand where we are on this map by looking back at the 5,000 day
series so far. The road so far. Exactly. Second, we're going to really re-examine AI, not as a tool,
but as an extension of the human nervous system. A McLuhanian idea. A prosthetic for the mind.
And third, we're going to use a specific tool of McLuhan's, something called the Tetrad,
to predict where this is all going. Specifically, a concept he calls reversal.
And trust me, the reversal is where things get really, really wild. It's the part of the map that says,
here be dragons. Okay, I'm ready. Let's start with the road so far. This 5,000 days series,
it's a huge body of work at ReadMultiplex. For anyone just joining this journey, can you
contextualize it for us? Where does it start? So this journey, according to Roemmele,
ignited in December 2025, that was the starting gun. And the series, it doesn't just track
technology. It creates a narrative arc that mirrors the stages of accepting a massive
life-altering change. So it's not a timeline of then GPT-5 came out, then GPT-6 came out.
No, not at all. It's a timeline of human emotion. It started with part 1, the awakening.
Right. Correct. Part 1 was that moment. The cold water to the face.
Think back to when you first really got it. When you saw AI do something that you genuinely
thought only a human could do. For me, it was watching it reason through a logic puzzle.
That was my moment. See? Everyone has one. It's the realization that for a student graduating
in 2026, the world of 2040, when they're in the prime of their career, will be completely
unrecognizable. The job they're training for, it's gone. Or it's been transformed into something
alien. It's the realization that we're moving into what Roemmele calls an age of abundance,
where AI and robotics just dissolve traditional labor. Which sounds great on paper. The end of work.
Everyone signs up. But then you immediately hit part 2. The grief. And I have to say,
reading through the notes, this part hit me hard. Yeah. Because usually tech folks are so optimistic.
They're all about, look at the efficiency gains. And Roemmele just puts the brakes on and says,
no, wait, this hurts. Can we dig into that? Because I think a lot of people feel that, but they don't
have a name for it. Absolutely. This is where Ronelle brings in Elizabeth Kubler Ross, you know,
the five stages of grief, denial, anger, bargaining, depression, acceptance. Yeah. He argues that we,
as a society, are collectively grieving the old world. We're grieving the idea that I am valuable
because I can do this specific rote task. So give me an example. Imagine you're a translator.
You've spent 20 years mastering French. You've studied the idioms, the nuance, the history.
It's your craft. It is your identity. Right. And then in the span of maybe 18 months,
a machine appears that can do it instantly for free. And this is the absolute kicker,
sometimes better than you. That isn't just a market shift. No, that's a death in the family.
That's a death in the family. You are grieving the death of your own utility. And Roemmele is one of
the very few voices out there saying, it's okay to grieve. In fact, it's necessary. If you stay in
denial, you get crushed. Then he brings in literature. Part three references, Kurt Vonnegut.
Player piano. It's such a brilliant and chilling connection. Vonnegut wrote that book back in 1952.
Wow. That early. Yeah. And it's about a future where machines do everything. And humans,
even the engineers, the very people who built the machines, feel this crushing sense of purposelessness.
This is the prophecy phase of the series. It's a warning. A huge one. Roemmele is basically
saying, look, if we don't find a new source of purpose, this age of abundance becomes a dystopia.
We become the people in that bar in Player Piano, just drinking away our own obsolescence.
So we have awakening, grief, prophecy. Then we get to what you call the really painful part,
parts four and five, reframing and the pain. This is what he calls the de-skilling phase.
And this is so crucial for you to understand, personally. As AI takes over rote tasks,
writing basic emails, coding simple scripts, analyzing data, we start to lose those skills.
Our own abilities, atrophy. It's like how I can't remember anyone's phone number anymore
because my phone does it for me. Exactly that. But on a massive professional scale.
Right. But imagine that applying to your ability to structure a clear paragraph,
or your ability to diagnose a problem from first principles, it's painful.
It feels like losing a limb you didn't know you had.
Roemmele calls this the crucible. You have to face the fact that you are becoming less competent
in the old ways, which leads directly to part six, the abyss, the dark night of the soul.
This sounds like the lowest point in the hero's journey, the belly of the whale.
It is the belly of the whale. It's the full-blown identity crisis. When the old self,
the self that was defined by I am a good writer, or I am a good programmer, dies,
who are you if a machine can do your job faster, cheaper, and better?
That's the abyss. And Roemmele argues that all of us collectively are staring into that abyss
right now, whether we admit it or not. But the hero doesn't stay in the belly of the whale.
The hero has to climb out. They do. And that's where part seven through ten come in.
This is the reconstruction. And here he brings in another author,
Iain M. Banks, and his Culture series, specifically a book called Consider Phlebas.
I haven't read that one. What's the connection there?
The Culture series depicts a far future civilization where AIs, which they call Minds,
are basically God-like. They run everything. Humans don't need to work. So what did they do?
What did they do? They play incredibly complex games. They make art. They travel the galaxy.
They find meaning and being, not just in doing. It's about finding purpose beyond simple utility.
And part nine, this is my favorite concept in the whole series, the artisans awakening.
It's the light at the end of the tunnel. It's this idea of reclaiming craft in a world of
machine perfection. If the machine handles all the average work, the midwit work, as he sometimes
calls it, then the human has to move to the edges. To the unique, the personal, the flawed.
Yes, the flaw becomes the feature. If I buy a table from IKEA, it's perfect and cheap. But if I
buy a table from a carpenter down the street, I'm paying for the fact that a human touched it.
I'm paying for the story. We start to value the human touch precisely because it's become scarce.
Exactly. And that brings us to the final part so far, part 10. Everyone is doing it.
This is the tipping point where we stop talking about AIs, this weird new novelty,
and just quietly embrace it. It becomes the water we swim in.
It's really fascinating how he weaves this all together. You've got Vonnegut, Kübler-Ross,
Iain M. Banks. He's not looking at code. He is, like you said, looking at the soul.
That's the key differentiator. Most AI thinkers are asking, will the model hallucinate?
Brian Roemmele is asking, will the human hallucinate a new identity? It's a totally different level of
analysis. So that's the map. That's the 5,000 days. But to navigate it, Roemmele says, we need an
oracle. We need Marshall McLuhan. Now, I know the name. He was the big media guy in the 60s.
But I feel like he's one of those figures. People quote, but don't actually know. Who was he, really?
Marshall McLuhan is one of the most fascinating and frankly bizarre intellectual figures of the 20th
century, born in Edmonton, Alberta in 1911, a Canadian. And his background is really important
here because he was not an engineer. He was not a scientist. But was he then a sociologist?
A student of English literature. He wrote his doctoral thesis on Thomas Nashe.
He loved James Joyce and T.S. Eliot. And he was deeply influenced by his mother, who was an
elocutionist and an actress. So he had a flair for the dramatic. He understood performance. He
understood that how you say something fundamentally changes what you say. And here's a detail that
people often miss, which Roemmele thinks is absolutely crucial. McLuhan converted to Roman Catholicism
in 1937. Really? I would not have guessed that. How does that fit in with media theory?
It gave him a sacramental view of the world, a spiritual, almost mystical lens. He didn't just see
technology as metal and wires. He saw it as something that directly altered the human spirit.
He once compared satellite technology to the star of Bethlehem. Wow. He believed that electric
media was knitting all of humanity back together into a single body. What he, in theological terms,
would see as the mystical body of Christ. That is way more mystical than I expected from a media
theorist. It is. And it's why he was so misunderstood. In the 60s, he became a pop culture icon.
He was on magazine covers. He even had that famous cameo in Woody Allen's Annie Hall.
Right. He steps out from behind a movie poster to correct some guy pontificating about his work.
You know, nothing of my work. It's classic. But the academics, the serious sociologists,
they despised him. They called him a determinist and an obscurantist. They said his ideas were
just abstract wordplay. The crazy professor trope. Exactly. But Roemmele's take is that he wasn't
crazy. He was just incredibly early. He was trying to describe the internet in 1962. Of course,
he sounded crazy. And there's a tragic human element to his story too. The brain tumor.
A benign tumor, but the size of a tennis ball, removed in the late 60s. A tennis ball. Good Lord.
Yes. And after the surgery, he struggled with a kind of hypersensitivity to noise. He became
even more aware of the total environment. And then in 1979, he suffered a massive stroke
that silenced him. He lost the ability to speak. Aphasia. The man who explained communication
to the entire world lost his own ability to communicate. He died about a year later.
That's heartbreaking. But his ideas, obviously, didn't die. So let's get into the big one.
The medium is the message. We hear it all the time. But let's be real. What does it actually mean?
Because my first instinct is, no, the message is the message. It's the most famous
misquote in history because people think it's just a clever slogan. It's actually a profound warning.
It means that the form of a technology, the medium itself, changes us and our society more than
the content it carries. Okay, give me the classic concrete example. The electric light bulb.
That was McLuhan's favorite. A light bulb has zero content. It doesn't run a TV show,
it doesn't have an article inside it. It's pure information, pure light. But think about what
it did to society. It let us see in the dark. It did so much more. It allowed us to perform
surgery at night. It allowed for night shifts and factories. It created nightlife. It allowed us
to play baseball games at 10 p.m. It completely restructured how society organizes time itself.
It turned night into an extension of day. So the medium, the light bulb rewired society.
Completely, regardless of what we used that light for. Yes, the message of the light bulb is we
now control the 24-hour day. The content, whether you're reading a book or performing surgery,
is almost irrelevant to that massive structural change. Okay, so how does that apply to AI?
Because AI seems to be all about the content. It writes the essay. It makes the picture.
That's the trap we fall into. We focus on the essay it writes. But Roemmele, using McLuhan,
says, the message of AI is that thought is no longer a solitary activity. It's a dialogue.
The medium itself is rewiring our fundamental expectations of effort, speed, and creativity.
The message is you are never thinking alone again. You are never thinking alone again,
and that changes the very definition of thinking. Wow. Okay. That connects to another McLuhan
concept Roemmele highlights: hot versus cool media. This one always trips people up.
Yeah, because the words don't mean what we think they mean. Hot doesn't mean exciting and cool.
It doesn't mean boring. Right. It's almost the opposite. A hot medium is high definition.
It's saturated with data. It gives you all the information. Think of a movie in a theater.
It dominates your senses. The sound is perfect. The image is crisp. You just sit there and
absorb it. You are passive. You're just a sponge. Right. And cool media is low definition.
It gives you very little information, which means you have to do the work to fill in the gaps.
Like a comic book. A comic book is a perfect example. You have to imagine the movement between
the panels. A blurry old TV signal is cool. Your brain is working to interpret the image.
A telephone conversation is cool because you have to imagine the other person's facial expressions.
It demands participation. And here's where I get stuck. Roemmele says,
generative AI is a cool medium. But when I use it, it gives me pages of text. It gives me
high resolution images. That feels hot to me. It feels like a lot of data.
But look at the interaction. Does it just broadcast at you like a radio?
No. Of course not. I have to type to it. I have to give it a prompt. I have to correct it.
Exactly. It requires you. It demands a prompt. It gives you a first draft.
And you have to refine it. You have to check it for hallucinations.
You are actively participating in the creation of the final message.
I'm co-creating it. You are co-creating. It's an extension of you. Not a broadcast at you.
It demands high participation. That's what makes it cool.
And because it's cool, it's far more addictive and deeply transforming than hot media.
When you watch a movie, you can walk away. When you use AI, you're merging with it.
That explains the fatigue, doesn't it? That feeling of being drained.
Yes. That's the cognitive science. Why is Zoom fatigue a real thing?
Because video calls are a cool medium. The audio is slightly out of sync. The eye contact
isn't real. Your brain is working overtime to fill in the missing social cues.
And AI is the same.
AI is the same. You are constantly monitoring, correcting, prompting.
It is exhausting your nervous system in a way that just passively watching Netflix never ever could.
You are doing heavy, cognitive lifting to complete the pattern.
Okay. That makes so much sense. So we have the guide, Roemmele.
We have the map of the 5,000 days. We have the Oracle, McLuhan.
Now we need the tool, the Tetrad. This seems to be the core of Roemmele's latest analysis.
What is the Tetrad?
The Tetrad was developed by Marshall and his son, Eric McLuhan, late in his life.
They wanted to create a science of media. They didn't want to just guess what technology did.
They wanted to find a law like gravity.
The four laws of media.
Right. It's a heuristic tool, which is just a fancy way of saying it's a way of asking questions.
It consists of four questions that you have to ask simultaneously about any human artifact or
technology to predict its entire life cycle.
And what are the four questions?
One, what does it enhance?
Two, what does it obsolesce?
Three, what does it retrieve?
And four, the big one.
What does it reverse into when pushed to its limit?
Okay. Before we apply this to AI, let's practice it.
Roemmele uses the car example in the notes.
I think that's a perfect way to visualize it.
Perfect.
So take the automobile.
Quadrant one, what's it enhance?
Mobility, speed, personal freedom.
I can go anywhere anytime.
It enhances privacy and speed exactly.
Quadrant two, what does it obsolesce?
The horse, the horse and buggy, cobblestone streets,
and walking to a large extent and passenger trains for a while.
It makes the old way of getting around obsolete.
Quadrant three, what does it retrieve?
This is always the most poetic one.
The notes say the knight in armor.
Think about it, a driver as a solitary figure encased in a metal shell
charging down the highway on a personal quest.
It brings back that feeling of the armored cavalier.
I have never thought of my Honda Civic as a suit of armor,
but I kind of like it.
And finally, Quadrant four, what does it reverse into?
When you push the car to its absolute limit,
when everyone has a car and everyone is on the road at the same time,
what happened?
Traffic, total gridlock.
The car, which was designed to move you fast,
the enhancement flips into its opposite,
becomes a parking lot, stops moving entirely,
mobility becomes total paralysis.
The technology reverses into its opposite.
That is elegantly terrifying.
It predicts the breaking point of any technology.
Every single one.
The internet enhances connection,
but when pushed to its limit,
it reverses into isolation.
Everyone alone in the room, staring at a screen.
So Roemmele is applying this powerful tool
to what he calls cognitive prosthetics,
tracing the line from calculators in the 70s to computers in the 90s,
all the way to generative AI today.
He's using the tetrad to see the future of our minds.
So let's do it.
Let's apply the tetrad to generative AI.
This is the heart of it.
Quadrant one, what does AI enhance?
It enhances the speed, the scale,
and the breadth of human thought.
Think back to the calculator.
It freed engineers from doing long division by hand.
It enhances their ability to calculate the stresses on a bridge.
AI does that, but for concepts.
It's the boon in the hero's journey.
It's the magic sword that lets the hero fight the dragon.
That's a perfect analogy.
It liberates time.
If I don't have to spend 10 hours researching the history
of 18th century French pottery for a novel I'm writing,
I can spend that time weaving a better plot.
It enhances synthesis.
It allows a single person to do the creative work
of what used to be a 10 person department.
Okay, that's the upside.
But with every enhancement comes a loss.
Quadrant two, what does AI obsolesce?
This is the painful part.
This is the de-skilling we talked about earlier.
It obsolesces rote memorization, of course.
It obsolesces manual calculation.
But much more dangerously, Roemmele argues,
it threatens what we call solitary deep work.
The idea of the lone genius in a quiet room.
Just thinking.
Yes.
Think of the slide rule holdouts.
In the 1970s, there were brilliant engineers
who refused to use electronic calculators.
They took pride in their manual math skills.
They said, if you use a machine,
you don't really understand the numbers.
What happened to them?
They became irrelevant.
They were obsolete.
Not because they were wrong about the math,
but because they were too slow.
Roemmele's warning is that if you refuse to adapt,
if you refuse to use this new cognitive prosthetic,
you face professional irrelevance.
But, hold on, is that really a fair comparison?
I mean, we stopped using slide rules
and society didn't lose the ability to do math.
In fact, we just did more complex math.
Why is this different?
That's the critical question.
With the calculator, we outsource calculation.
With AI, we are in danger of outsourcing reasoning.
We're outsourcing the logical structuring itself.
If the machine constructs the argument for you,
do you forget how to construct a coherent argument on your own?
That's the potential cultural loss.
That's chilling.
Okay, let's move to the weird one.
Quadrant three.
What does AI retrieve?
New tech almost always brings back something ancient.
What is AI bringing back from the dead?
This is where Roemmele's analysis is just brilliant.
He argues that AI retrieves the oral tradition.
It retrieves Socratic dialogue.
Because we're talking to it.
Yes.
Think about the last 500 years.
Since the printing press, how have we learned?
We read books in silence.
We sat in libraries.
Learning was a visual and solitary act.
True.
But before the printing press, for thousands of years,
how did humans learn?
By talking.
By telling stories around a fire.
By arguing with a mentor in the Agora.
AI brings that back.
We aren't typing rigid code syntax.
We aren't moving punch cards.
We are conversing.
Hey, explain this concept to me like I'm five.
What about this angle?
Summarize this story for me.
It's intuitive.
It's pattern matching.
It's pre-literate intuition.
Roemmele connects this directly back to the dark night
of the soul from part six.
This retrieval is the ally that appears in the abyss.
It helps us survive the transition
by reconnecting us to something very old and very human.
Storytelling and dialogue.
We're finally talking to our computers
like Captain Kirk did on the Enterprise.
So it enhances speed.
It obsolesces rote work.
And it retrieves the oral tradition.
Oh no.
Now we get to the scary part.
The final quadrant, the reversal.
The climax of the hero's journey,
where the magic sword turns against you.
And Roemmele identifies not one,
but five potential reversals for the 2030 to 2050 horizon.
Let's go through them, because this
is where the map tells us to watch out.
Reversal number one, cognitive traffic jam and aphasia.
This is the gridlock scenario, just like with the car.
The enhancement of thought becomes the paralysis of thought.
Imagine a world maybe in 2038 where we use AI for everything,
every email, every decision, every creative thought,
is mediated through a model.
We're all enhanced, we're all super smart.
Sounds good so far.
But what happens if the power goes out?
What happens if there's a massive solar flare?
Or a government regulation suddenly locks down the model?
We freeze. We can't function.
We experience aphasia of thought.
Aphasia is that medical condition
where you lose the ability to understand or express speech
because of brain damage.
Roemmele is predicting a potential mental health crisis
where people discover their inner voice has gone silent.
That's, I'm trying to actually picture that.
Think of it like a wheelchair.
If you have perfectly good legs,
but you decide to sit in a motorized wheelchair
for five years because it's faster and easier,
what happens to your legs?
They'd atrophy.
I literally wouldn't be able to walk.
Exactly, you would have walking aphasia.
Now apply that same principle to your brain.
If you never have to struggle to find the right word,
if you never have to structure a logical chain from scratch,
those neural pathways wither and die,
you reach for the prosthetic,
and if it's not there, you fall down,
you literally cannot think.
That is the most terrifying thing I've heard all year.
It's not that the robots kill us in a war;
it's that we forget how to live without them.
And that leads directly into the second reversal.
If AI makes thought cheap, easy, and instant,
what becomes valuable?
Reversal number two, friction as the new luxury.
Friction, you mean doing things the hard way?
Precisely, think about it.
Today, a luxury experience is seamless and frictionless.
You tap a button, food arrives.
You tap a button, a car arrives.
But in the age of abundance, ease becomes cheap.
Everyone has ease, so effort becomes the new luxury.
Okay, so what does that actually look like in practice?
Roemmele predicts that zero AI will become a prestige marker.
Imagine a law firm in 2035 that advertises:
No algorithms used.
All our briefs are based on 100% human only reasoning.
Imagine checking into a hotel that costs $5,000 a night
because there are no screens, no sensors.
And the concierge is a human being writing notes by hand.
It's a signal.
It proves you spent the time.
You spent the cognitive energy.
It ties directly back to the artisan's awakening.
The friction of doing it yourself, the flaws, the time,
the struggle, becomes the ultimate status symbol.
In 2035, the richest people might be the ones
who consume the least amount of AI.
The poor will be constantly augmented.
The rich will be willfully luxuriously unaugmented.
That is a complete flip.
Today, we assume rich people get the best tech first.
Tomorrow, they might pay a fortune to avoid it.
Exactly.
And that flows into reversal number three.
If AI models are trained on the statistical average
of all human knowledge, which they are,
they tend to output what Romail calls midwit mush.
Midwit mush, I love that term.
It's safe, average, homogenized content.
So the flip, the reversal, is radical idiosyncrasy.
Uniformity births rebellion.
Yes, because the AI is so smooth and perfect and average,
humans will revolt by creating things
the AI can't process, developing private languages,
unpromptable art, slang that evolves so fast
the models can't keep up, analog mysticism,
and underground culture.
It's the punk rock of 2035.
Secret societies that hide their notes from the models.
If the sunlight of AI is everywhere,
the most interesting humans will move into the shadows
to create something real and strange.
Punk rock against the algorithm.
I love it.
Okay, reversal number four.
This one sounds almost spiritual.
The servant becomes the atmosphere.
The tool disappears.
Right now, AI is a chatbot.
It's a thing on a screen.
You go to ChatGPT. But when it's truly ambient,
in your glasses, in your walls, whispering in your ear,
it stops being a tool you use.
It becomes the environment you inhabit,
like gravity or the air you breathe.
And what does that do to us?
To our sense of self.
It dissolves the me.
The identity crisis reaches its absolute peak.
We risk becoming nodes in a planetary mind.
McLuhan talked about retrivalization.
Well, this is the ultimate tribe.
If I can access all human knowledge instantly,
just by thinking it, where do I end and the network begin?
We become the Borg.
Or we become a beautiful collective consciousness.
It's not necessarily evil,
but it is a fundamental change
in what it means to be an individual human being.
If your thoughts are being auto-completed by the cloud,
are they really your thoughts?
Which I imagine would drive some people
to the final reversal.
Number five, the great unplugged renaissance.
The reaction to the atmosphere.
When instant fulfillment hollows out all meaning,
you see a mass voluntary return to friction.
Roemmele describes a future with monasteries of deep work,
dumb phone towns, communities that ban the tech,
not because they're Luddites who hate progress,
but because they are preservationists
of the human soul.
People paying a premium to not be augmented.
Exactly.
It's the ultimate luxury good of the late 2030s.
Silence, solitude.
The ability to be completely alone inside your own head
without the chatter of the entire world.
These reversals, they're heavy,
aphasia, friction as luxury,
the atmosphere, the unplug,
it really feels like we're heading toward a fundamental split
in the human species.
And that brings us right back to the end
of the hero's journey, the return.
So we've gone through the call to adventure.
We've been in the abyss, and we've seen the dragon,
the reversal.
Now the hero has to come back to the ordinary world
with the elixir.
What is the elixir in Roemmele's view?
How do we survive this?
The elixir is a two-word concept,
volitional mastery.
Volitional mastery.
Okay, explain that.
That sounds like a Jedi term.
It basically means you have to choose.
You cannot be a passive victim of this tide.
You have to master the AI now.
You have to use it, understand it,
integrate it into your workflow.
But, and this is the absolute crucial part.
You have to do it with the conscious intention
of preserving your humanity later.
You learn to use the sword
so the sword doesn't end up using you.
Perfectly said.
The elixir is the knowledge of how and when to switch it off.
It's about maintaining your human only reasoning capability
while still benefiting from the abundance.
You use the prosthetic to run faster,
but you make damn sure you still run on your own legs
every single day so they don't atrophy.
And this is where we see Brian Roemmele's role in all this.
He isn't just reporting on the weather.
He's trying to build the arc for us.
He is the guide.
He's the embodiment of this journey.
He differentiates himself from every other AI pundit
because they're looking at stock prices
and he's looking at the human nervous system.
He understands that AI is an extension of us,
not just a product.
He's weaving this 5,000 days narrative
to prepare you the listener.
The call to adventure is for us.
Yes, we are in the interregnum.
The old world is dead.
The new world is not yet born.
Those who resist, they'll be victims.
They will get washed away by the deskilling.
They'll suffer the grief
without ever reaching the reconstruction.
But those who master it,
those who use this tetrad as a map.
They forge the legends.
They become the artisans.
They do.
So what does this all mean for the listener right now today?
We aren't in 2039 yet.
We are on that swaying rope bridge.
What do we do?
It means you need to look at your tools differently.
Don't just ask, how can this make me faster?
Ask the tetrad questions.
What is this tool retrieving?
What is this tool obsolescing in me?
Am I losing a skill that I actually want to keep?
And actively prepare for the reversal.
Exactly.
Build your own personal friction.
Write a letter by hand once a week.
Read a physical book.
Go for a walk without your phone.
Keep your unaugmented cognitive muscle strong.
Because when the reversal comes,
when the gridlock hits, or the atmosphere becomes too thick,
those muscles will be the only thing
that saves your sense of self.
That is such a powerful image.
The artisan awakening isn't about woodworking.
It's about...
It's about being an artisan of the mind.
That's the mission.
We've used McLuhan's tetrad to map the trajectory of AI
from a tool to an environment.
We've seen the clear danger of cognitive aphasia.
But we've also seen the opportunity,
the chance to become something new,
something more than just a worker bee.
I want to leave our listeners with one final provocative
thought, a specific image from that reversal.
Imagine a future maybe 10 or 12 years from now
where human only thought is a luxury good.
Where the ability to think a complex problem through
from start to finish without asking an AI for help
is the rarest and most valuable skill on the planet.
Are you building that capacity in yourself now?
Or are you letting the machine
slowly, comfortably atrophy your mind?
That is the central question of the 5,000 days.
If you want to follow this journey,
and you absolutely should, because you're going to need this guide,
you need to go to readmultiplex.com.
Follow the 5,000 day series.
Brian Roemmele is the guide you need.
He's the one seeing these patterns.
The age of abundance is coming, whether you like it or not.
But it's up to you to decide if you'll be a victim
of it or a master of it.
Thanks for diving deep with us. We'll see you in the future.
Stay human.

ReadMultiplex.com Podcast.