
The Inevitable Ascent of AI: Echoes of Prediction in the 5000 Days Framework
Moments of collective realization often arrive with a jolt. Matt Shumer's recent article, "Something Big Is Happening," published on his personal site and now prompting widespread discussion and introspection, captures precisely such a moment. Shumer, an AI entrepreneur with extensive experience in building startups and investing in the field, outlines a transformative shift underway, driven by exponential advancements in AI models. He draws parallels to the societal upheaval of the COVID-19 pandemic, emphasizing that AI is not a distant future threat but an immediate disruptor already reshaping jobs, economies, and daily life. With recent releases like OpenAI's GPT-5.3 Codex and Anthropic's Opus 4.6, Shumer highlights AI's newfound capabilities in judgment, taste, and self-improvement, warning of an impending "intelligence explosion" that could render vast swaths of cognitive work obsolete within one to five years. His call to action is urgent: experiment with AI tools daily, build financial resilience, and rethink education and careers to adapt to this irreversible change.
This piece has resonated profoundly, garnering over 40 million views on X and other platforms since its posting, a testament to its timeliness and the growing public awareness of AI's implications. Yet, for those familiar with Brian Roemmele's extensive body of work, Shumer's observations arrive not as a surprise but as a confirmation of long-foretold trends. Roemmele, a futurist and founder of ReadMultiplex.com, has been chronicling the ascent of AI and its societal impacts for decades. His "5000 Days" series, launched on December 24, 2025, provides a structured roadmap for navigating what he terms the "Abundance Interregnum"—a transitional period of approximately 13.7 years (roughly 5000 days) leading to an era where human labor decouples from necessity, ushering in unprecedented plenitude. This series, now spanning multiple installments, frames the current AI developments as entirely expected, aligning with predictions that have been articulated well before the latest model releases. In essence, Roemmele's work carries an implicit "I told you so," underscoring that the disruptions Shumer describes have been on the horizon for years, if only more people had heeded the signals.
Read more at: ReadMultiplex.com
Welcome back to the Deep Dive.
I have to be honest with you, right off the bat,
today's stack of research, it just feels different.
It really does.
Usually we're looking at a trend or a new piece of tech,
maybe a historical event, but today.
Yeah.
Today we're analyzing a vibration.
A vibration, that's a good word for it.
There's a distinct shift in the atmosphere, isn't there?
Yeah.
We've spent years, and I mean years,
in this sort of theoretical phase of artificial intelligence,
we've been talking about what might happen, what could happen.
The someday phase.
Exactly, the someday phase.
Yeah.
But looking at the sources for today,
it feels like we've just violently crossed a threshold.
We're not in theory anymore.
We've slammed into tangible reality.
Exactly.
And the catalyst for this deep dive,
the thing that actually triggered us to sit down and record this,
is a viral article that has absolutely exploded across the internet.
It's a piece by Matt Shumer titled
"Something Big Is Happening."
And when I say exploded,
I'm not just talking about a niche tech bubble.
No, this is different.
This is over 40 million views on X.
I mean, it is everywhere.
It's being shared in tech circles, sure.
But I'm seeing it from artists, accountants, teachers,
people who usually just scroll right past this stuff.
It's crossed the membrane.
It has.
Which is usually the primary sign that the signals become too loud to ignore.
What Shumer captures in this piece is what I'd call a jolt.
A jolt, yes.
It's a jolt of collective realization,
that specific identifiable moment
where all this fringe tech talk we've been following
just crashes violently into mainstream consciousness.
Jolt is the perfect word for it.
Reading it, I felt this physical knot in my stomach.
Not necessarily fear, there is some of that in there,
but vertigo.
Vertigo's a great way to put it.
It feels like everyone is suddenly waking up to something huge all at once.
Shumer frames it as an immediate disruptor.
He's not saying this is five years away.
He's saying it's happening right now.
Today, precisely.
But here's the fascinating twist we are going to take for this deep dive.
Shumer is ringing the alarm bell.
And he is absolutely right to do so.
Yeah.
We need to hear it.
But if we only listen to the alarm, we just panic.
We run around in circles. We need a map, to know where we are
and where we're going.
Okay, let's unpack that.
Because while Shumer is the trigger,
he's not the map maker here.
This is where we pivot to our second
and, honestly, our more substantial source for today.
Correct.
If Shumer is the lookout in the crow's nest,
shouting "iceberg ahead," or maybe "land ho," depending on your perspective,
then the navigator who drew the charts for this exact ocean years ago
is Brian Roemmele.
Brian Roemmele?
He's a futurist, the founder of ReadMultiplex.com,
and he has been chronicling the Ascent of AI since the 1980s.
I mean, this is his life's work.
So this deep dive is really a special ReadMultiplex deep dive
because we are going to look at Shumer's viral moment
through the lens of Roemmele's framework.
And specifically his massive body of work
called the 5,000 day series.
Which, and this is the crazy part,
he launched way back in December 2025.
The premise of our discussion today is essentially this.
Matt Shumer's viral warning.
It's exactly what Brian Roemmele
predicted would happen right now, almost to the day.
That's uncanny.
It is.
So our mission is to analyze Shumer's jolt
through the framework of Roemmele's 5,000 days.
And this is the really interesting part.
The hero's journey.
I love that.
So we aren't just looking at what is happening with the tech,
the models, the disruption.
We're looking at where we are on the map of history.
It helps quell the panic a bit when you realize someone saw this coming
and actually wrote a syllabus for it.
It structures the chaos.
And we need structure right now more than ever.
Absolutely.
So let's start with the signal itself.
Let's deconstruct Matt Shumer's "Something Big Is Happening."
What exactly is he saying that got 40 million people
to stop scrolling and pay attention?
Because let's be honest, people scroll past
"AI is coming" posts all day long.
All day.
Why did this one stick?
Well, Shumer's core argument resonates
because it's about two things, speed and irreversibility.
And he anchors it with a comparison
that everyone globally has a visceral memory of.
COVID-19.
COVID-19.
Think back to March 2020.
Do you remember that specific week that one week where everything changed?
Vividly.
I remember it almost on a day-by-day basis.
It was like one day the world was normal.
And the next day the NBA was canceled.
Borders were closing.
And everyone was like washing their groceries.
Exactly.
It wasn't a gradual transition where we slowly adopted masks
and remote work over a decade.
It was an immediate system wide shock.
Hard stop.
A hard stop.
Shumer is arguing that the release of these recent AI models
is that same level of disruptor.
He's telling us that the normal world
we knew last month ended yesterday.
And we're just now realizing we're in the new one.
And he's not being vague.
He names names.
He's not talking about some generic AI in the clouds.
No, he's very specific.
He's talking about recent releases like OpenAI's GPT 5.3 Codex
and Anthropic's Opus 4.6.
And the distinction he makes is vital.
For a long time, skeptics, and frankly many of us,
dismissed AI as stochastic parrots.
Just fancy autocomplete, predicting the next word
based on probability.
Right, just remixing what it's already seen.
No real understanding.
Shumer says we have moved past that paradigm.
He's arguing we are now dealing with systems that possess,
and these are his words, judgment, taste,
and the ability to self-improve.
OK, that self-improve part.
That is where it gets really interesting.
And honestly, a little scary.
That feels like the sci-fi line we weren't supposed
to cross for another 50 years.
It's the mechanism of what theorists
call the intelligence explosion.
Shumer points out that we are entering a phase
where these models can debug their own code.
Which means they can.
They can accelerate their own development.
So let's break that down in simple terms.
Yeah.
A human, a person, writes a piece of code.
It has a bug.
The human has to sleep, drink coffee,
stare at the screen for hours, maybe ask a colleague.
Right, it's a slow biological process.
And then they fix the bug.
That takes time.
A lot of time.
But now, the AI writes the code.
The AI tests the code.
The AI finds the bug.
The AI fixes the bug.
It does this loop, this entire cycle, in milliseconds.
24 hours a day.
Seven days a week.
With no coffee breaks, no weekends, no sleeping.
So it's a recursive loop.
The AI builds a better AI, which immediately
starts building an even better AI.
Exactly.
And that creates an exponential curve.
It's the classic hockey stick graph,
but for intelligence itself.
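To make the hosts' point concrete, here is a minimal toy model of that curve, linear human practice versus compounding machine self-improvement. The starting level of 100 and the 5% per-cycle gain are arbitrary illustrative numbers of ours, not figures from Shumer or Roemmele.

```python
# Toy model: linear human learning vs. compounding self-improvement.
# All numbers are illustrative assumptions, not data from the article.

def human_skill(cycles, gain_per_cycle=1.0):
    """Linear improvement: a fixed increment per practice cycle."""
    return 100 + gain_per_cycle * cycles

def ai_capability(cycles, rate=0.05):
    """Compounding improvement: each cycle builds on the last."""
    return 100 * (1 + rate) ** cycles

for c in (0, 10, 100):
    print(f"cycle {c:3d}: human {human_skill(c):7.1f}, ai {ai_capability(c):9.1f}")
```

Run it and the "horizontal versus vertical" gap the hosts describe appears quickly: the linear curve doubles over 100 cycles while the compounding curve grows by two orders of magnitude.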
And this is the disconnect.
Humans adapt linearly.
We learn a new skill.
We practice.
We get better over months and years.
That's a straight line.
It's a straight line.
But AI is improving at a rate that is, well,
vertical compared to our horizontal adaptation.
This is why Shumer predicts that vast swaths of cognitive work
could be rendered obsolete within one to five years.
One to five years.
One to five.
That's nothing.
That is a blink of an eye in career terms.
That's less time than it takes to get a college degree.
So if I'm a junior developer right now,
or a copywriter, or an accountant,
and I'm reading that, I'm panicking.
I'm thinking, I just spent four years
and a hundred thousand dollars on a degree for this.
Which is why his call to action is so aggressive.
He's not saying, keep an eye on this.
He's not saying, learn about AI in your spare time.
He's saying, experiment with AI tools daily.
Not weekly, daily.
Build financial resilience immediately.
Start saving like your job depends on it
because it does.
And rethink your education and career path now.
He's practically screaming at the reader to wake up.
It's the adapt-or-die speech.
It's harsh, but maybe it's necessary
if that timeline is even remotely accurate.
In a sense, yes.
But this is where we really need to widen the lens.
Shumer provides the immediate shock.
He's the smelling salts that wake you up.
Okay.
But Brian Roemmele provides the context of why you woke up
and, more importantly, where you are now that you're awake.
Let's bring in the 5,000 days framework.
Because when Shumer posted this,
I have to imagine Roemmele's long-time readers
were just nodding their heads saying,
yep, here we go.
It's starting.
They absolutely were.
I mean, Brian Roemmele has a bit of an
I told you so moment right now,
though he's always very gracious about it.
He launched the 5,000 days series
to describe what he calls the abundance interregnum.
Abundance interregnum.
Okay, that sounds incredibly heavy and academic.
Let's unpack that.
What does it actually mean?
An interregnum is a pause or an interruption
between two periods of rule.
Historically, it's like when a king dies
and his successor hasn't been crowned yet.
Okay, a chaotic in between time.
Exactly.
The old laws don't quite apply anymore,
but the new laws aren't written yet.
It's a time of flux, of uncertainty, of chaos.
So we are in the gap between two worlds.
Roemmele posits that we are entering
a transitional period of roughly 13.7 years,
which is about 5,000 days.
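A quick sanity check on those two numbers, which line up almost exactly:

```python
# Does "roughly 13.7 years" really come out to about 5,000 days?
DAYS_PER_YEAR = 365.25  # average year length, including leap days
years = 13.7
days = years * DAYS_PER_YEAR
print(round(days))  # lands within a few days of 5,000
```

So 13.7 years is about 5,004 days, close enough that the two figures are interchangeable in this framework.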
We're moving from the world we know,
a world defined by scarcity and labor for survival.
The world of the 9 to 5 job.
The world of the 9 to 5, yes.
To a destination he calls unprecedented plenitude.
A world where we don't have to work to survive.
That sounds like Star Trek.
That sounds like a utopia.
The destination is utopian.
The destination is a world where energy, intelligence,
and goods are effectively free or close to it.
But, and this is the big flashing neon,
but we have to get through the 5,000 days first.
Right, the journey.
And Ramell describes this journey
not as a smooth ride on a bullet train,
but as a crucible of change.
So the destination is paradise,
but the journey is hell.
The journey is upheaval.
It's the storm Schumer is feeling right now.
Roemmele predicted that before we get to the abundance,
we have to go through the complete deconstruction
of the current economic order.
You can't build the new world on top of the old one
without tearing some things down first.
And the alignment between their predictions is spooky.
It's not just AI will be good or AI will be disruptive.
It's the specifics.
It's incredibly precise.
Let's take coding, for example.
Shumer points out that AI writing apps and code
is the tip of the spear.
He says the new models are basically
software engineers in a box.
Roemmele, years ago, identified coding proficiency
as the catalyst for broader AI autonomy.
He argued that once AI mastered code,
the language of its own creation,
the language of logic, it would unlock everything else.
So teaching them to code was basically giving them
the keys to the library and the ability
to rewrite all the books, including the ones about themselves.
Essentially, it was the let there be light moment
for machine intelligence. Another parallel: displacement.
Roemmele predicted that 50% or more of white collar jobs,
law, finance, software, media, would be disrupted.
Not blue collar jobs, not at first, the cognitive ones.
The ones we were all told were safe.
The very same.
And he was specific about the irreversibility.
Both Shumer and Roemmele agree on this point.
There is no going back. In previous economic shifts,
like when manufacturing left a region in the US
or Europe, people were told to retrain.
The mantra for decades was learn to code.
Learn to code was the meme.
It was the solution to everything.
The coal mine closed? Just learn Python.
The factory shut down.
There's a coding bootcamp for that.
But what happens when the thing you are retraining to do
is also being automated by the machine?
There's nowhere to run.
Exactly.
AI's generality covers the adjacent fields, too.
You can't just pivot from being
a paralegal to a copywriter if the AI is doing both of those
things better than any human.
You can't pivot to coding if the AI is the best
coder in the world.
That's the daunting part.
There's no safe harbor.
In previous industrial revolutions,
you could move from the farm to the factory.
From the factory to the office, you
could move to the safe job.
Now, the safe jobs, the ones that require a college degree,
are the ones being targeted first.
Exactly.
The cognitive jobs are the target.
And this brings us to what Roemmele calls the syllabus of change.
He actually breaks down this 5,000 day period
into specific phases or parts.
It's like a curriculum for the end of the world as we know it.
OK, I need to see the syllabus.
Let's walk through it.
Part one is the call to adventure.
Yes.
And he frames this in very human terms.
He focuses heavily on the class of 2026
and on mid-career professionals.
He identifies this moment right now
as the time when traditional career paths begin to just vanish.
So the call to adventure is basically a warning siren.
It is.
It's the realization that the degree you are currently studying
for might be obsolete before you even graduate.
Imagine being a sophomore in computer science right now.
The job you're training for might not
exist in the same way in two years.
That is a terrifying thought for a parent paying tuition
right now.
You're buying a ticket to a destination that doesn't exist anymore.
It is.
And that fear that sheer cognitive dissonance
leads directly to part two of the syllabus, the refusal of the call,
the denial, the psychological denial.
We see this everywhere.
Every day, it's the AI can't do what I do.
It has no soul.
Clients will always want a human touch.
It hallucinates sometimes, so it's useless.
I hear that constantly.
AI can write copy, but it can't understand
the nuance of my brand or AI can't really
negotiate a complex legal contract.
That is the refusal of the call.
It's a defense mechanism.
And it's understandable.
But Roemmele warns that this denial is incredibly dangerous,
because it wastes your most precious resource: time.
Time.
While you are arguing about whether the AI has a soul,
the AI is learning to do your job for one one-thousandth of the cost.
OK, so after denial, he moves to part three, which
uses this really cool historical analogy, the player piano.
I honestly hadn't thought about player pianos in years,
but this analogy really, really clicked for me.
This is one of my favorite parts of his entire analysis.
That's so clarifying.
Yeah.
Think about the player piano in the late 19th and early 20th
centuries.
Before that machine existed, if you
wanted to hear music in a saloon or in your home,
what did you need?
A person, a pianist.
A human had to play it.
It was a skill.
It was a job.
A pianist was a necessary economic unit
for music to exist in that space.
And this machine comes along.
You put in a perforated paper roll.
You pump the pedals and the keys moved by themselves.
Exactly.
It replicated the pianist's performance perfectly
every single time.
Didn't get drunk.
It didn't ask for a raise.
It completely disrupted the livelihood
of thousands of saloon pianists and musicians.
It turned the act of performance
into a commodity you could just buy and play automatically.
So Roemmele is using this as a template
for how AI generates content.
We are facing a player piano moment
for all cognitive output.
For everything, writing, coding, design, analysis,
strategy, it's all becoming a piano roll.
The text I'm writing, the code you generate,
it's just a pattern on a roll now.
Yes.
The machine can play the tune without the musician
being present in the room.
The soul of the musician, that ineffable human quality,
isn't required for the music to be heard anymore.
That's a powerful and slightly chilling analogy.
And it leads directly to part five of his syllabus,
deskilling.
This is one that really stuck with me.
Because Shumer says use AI daily, which makes total sense.
You have to learn the tool.
Right.
But Roemmele adds this layer of nuance,
this warning about the erosion of human expertise.
It's a paradox.
It's a real tight rope walk.
You must use it to stay relevant.
But if you rely on it completely,
you erode your own capabilities.
Roemmele warns specifically about the loss of tacit knowledge.
Tacit knowledge.
That's the stuff that isn't in the manual, right?
The gut feelings, the intuition.
Exactly.
It's the stuff you can't write down.
It's the experienced surgeon who knows something is wrong
before the monitors show it.
It's the senior engineer who can just look at a piece of code
and know it's going to cause problems,
even if it's technically correct.
It's the why.
It's understanding why a piece of code works.
Not just that it works.
If the AI writes the code and your job is just to check it
for errors, eventually you forget how to write the code
from scratch.
You lose the ability to reason through the problem
from first principles.
It's like using a GPS.
I'm embarrassed to admit this, but I don't really know how
to get anywhere in my own city anymore
because I just follow the blue line on the map.
If the GPS breaks or there's no signal, I'm genuinely lost.
I've offloaded that mental map to the machine.
Precisely.
That is the human-in-the-loop problem on a personal scale.
If we deskill too much, we become helpless passengers.
Roemmele argues we need to maintain our agency.
We need to be the captain of the ship
using the advanced navigation tools,
not just becoming cargo in the hold.
And then after this process of deskilling comes part six,
the dark night of the soul.
That sounds ominous.
It is, but it's a necessary part of the journey.
This is the emotional core of the transition.
It is the period of existential doubt
when the job is actually gone, or the threat of it being
gone is undeniable.
The identity crisis.
A global identity crisis.
We tie so much of our identity, our self-worth,
to what we do for a living.
I am a lawyer.
I'm a truck driver.
I am a graphic designer.
When that economic value is stripped away,
we are left asking, who am I?
What is my purpose?
That's heavy, but it's so real.
Shumer's article hints at this panic, the urgency,
but Roemmele gives it a name.
He validates that feeling of dread.
If I'm not the guy who writes code,
then what is my value to my family, to society?
But, and here's where the framing is absolutely vital.
Roemmele doesn't leave us in the dark.
This isn't a doomsday prediction.
He uses Joseph Campbell's monomyth,
the hero's journey, to structure this chaos.
I love this, connecting ancient myth to futuristic tech.
It grounds it, makes it feel human.
So how does the hero's journey map onto the AI revolution?
Well, think about the classic stages.
First, you have the ordinary world.
That's the pre-AI stability we're just now leaving.
The nine to five, the career ladder, the predictable path.
The world where, if you worked hard and follow the rules,
you got promoted.
That world feels like a distant memory already.
Then comes the call to adventure.
That's the release of GPT 5.3.
That's Matt Schumer's article popping up on your screen.
It's that external force that shakes you
out of the ordinary world.
It's the universe knocking on your door, saying,
wake up, things are changing, and you have to change with them.
Followed by the refusal of the call,
which we already talked about, the denial.
It's just a chatbot.
Can't do my job.
Then we enter the phase of tests, allies, and enemies.
This is, I think, where we are right now.
We are trying to find our abundance transition groups,
as Roemmele calls them.
We are figuring out who our allies are in this journey.
We're dealing with the initial economic chaos.
We are trying to figure out which tools are friends
and which are just distractions.
And this leads to the central challenge, or ordeal.
The dark night.
That's the peak of the crisis, the loss of the job,
the confusion about identity.
Yes, the moment of greatest darkness.
But in the hero's journey, what comes after the ordeal?
The reward.
The return with the elixir.
Roemmele calls this the artisan's awakening.
The artisan's awakening, that sounds beautiful.
What does he mean by that?
It is the idea that once we get through the crucible
of the transition, once we successfully decouple human labor
from human survival, we return to creativity and passion.
We do things because we want to, not because we have to pay rent.
So we all become artisans.
We bake bread, we write code, we paint,
we solve physics problems, just for the joy of it.
For the craft.
That is the elixir.
That is the hope at the end of the 5,000 days.
Shumer's article is a warning about the storm.
Roemmele's framework is the promise of the beautiful,
calm, creative day that comes after the storm has passed.
That perspective shift is huge.
It really is.
It turns, oh my god, I'm going to lose my job into,
I am being called on an adventure.
And I'm going to have to go through fire to become something new.
Exactly.
It gives the suffering meaning.
It's not just random economic destruction.
It's a narrative arc.
We are the heroes in this story, not the victims of it.
Now, navigating that arc, that journey
requires dealing with some serious, deep emotions.
And Roemmele actually brings another framework for this, right?
He connects it to Elisabeth Kübler-Ross, the five stages of grief.
He does.
Because the death of a career or the death
of an entire mode of economic existence is a death.
It's a loss.
And we have to grieve the old world
before we can fully embrace the new one.
You can't just skip to the acceptance part.
So let's go through them.
Stage one denial, we've talked about that a lot.
It's just hype.
Remember crypto, remember the metaverse?
They all crashed.
AI will fail too.
That is pure denial.
Shumer's urgent, almost frantic tone
is designed specifically to try and snap people out of this stage.
OK, stage two, anger.
Ban the bots.
Sue the tech companies.
This is theft of artists' work.
We see this with the lawsuits, the regulatory outcry.
It's a natural and understandable reaction
to a perceived threat.
People feel like something is being stolen from them.
Their livelihood, their craft.
Stage three is bargaining.
This one feels a little more subtle.
What does bargaining look like in this context?
Bargaining is the frantic, panicked upskilling.
It's the person who thinks, OK, if I just
learn the specific prompt engineering technique,
I'll be safe.
Or if I just pivot my business into this super-specific niche,
the AI won't find me there.
It's trying to make a deal with the inevitable.
Yes.
It's trying to find the one magic trick that will make you immune.
But you're saying that's probably futile.
In many cases, yes.
Because the AI is a general intelligence,
it learns the niche faster than you can pivot.
Bargaining is like trying to make a deal with a tidal wave.
You might build a small wall and buy yourself a few months,
but the wave is still coming.
Then comes depression.
This is the dark night of the soul.
It's the loss of purpose.
It's the artist thinking, why bother learning
to draw for 10 years if the model can generate a masterpiece
in 10 seconds?
It's the translator thinking, why bother learning
French if this earbud translates it perfectly in real time?
This is the danger zone, the point of despair.
And this is where Roemmele suggests tools like Viktor
Frankl's work.
Exactly.
Frankl's man's search for meaning.
The idea that you can find purpose even
in the most dire of circumstances.
The challenge is to decouple your value
as a human from your economic output.
You are valuable because you exist,
not because you produce widgets for a company.
That is a profound psychological shift for most of us.
And finally, after all of that comes acceptance.
The elixir, the artisan's awakening.
This is embracing the abundance.
It's using these powerful tools to expand your human potential
rather than fighting them as a threat.
It's the deep core realization that the AI isn't replacing you.
The conscious being is replacing the drudgery,
the boring parts of your job you never liked anyway.
It's a lot to process.
I mean, going through all five of those stages
is a journey in itself.
And while individuals are going through this,
there are real world societal consequences happening.
We need to talk about the edge cases
and the broader implications that Roemmele highlights.
Absolutely.
One of the biggest and perhaps most concerning
is what he calls the laggard penalty.
The laggard penalty, what's that?
So Shumer's article has 40 million views.
That's great.
It's the democratization of awareness.
But there are 8 billion people on the planet.
Those who adapt early, the people listening to this,
the people who are building AI literacy right now,
they gain massive compounding advantages.
And those who wait.
Those who wait or those who don't have access
to this information or who stay in denial,
they face abrupt catastrophic displacement.
The penalty for being a laggard in this transition
is not small.
It's potentially devastating.
So inequality, which is already a huge problem,
could skyrocket during this 5,000 day transition.
It seems extremely likely.
Roemmele points out the global impact,
specifically on developing economies.
Think about a country like India,
with its massive coding and BPO business process
outsourcing workforce.
Oh, right.
Millions and millions of jobs answering phones,
writing basic code, handling data entry.
It's been the backbone of the global tech economy
for the last 20 years.
Those are the exact jobs in the direct crosshairs
of GPT 5.3 and Opus 4.6.
The disruption there will be felt much faster
and much harder than in some other sectors.
If the AI can handle tier one customer support
or write basic Java code for nearly free,
that entire economic model collapses overnight.
That is a very sobering thought.
It's not just about my career in a wealthy country.
It's about global stability.
And then there's the flip side of deskilling,
the preservation of wisdom.
We touched on a loss of tacit knowledge,
but Roemmele emphasizes the need for individuals
to conduct personal vulnerability audits.
A vulnerability audit, that sounds like something
a corporation does for its cybersecurity.
And you should treat your life and career
like a corporation in this instance.
You need to sit down, take a deep breath,
and look at your skills honestly.
What do I do every day?
What parts of my job can GPT 5.3 do right now?
Be brutal.
No ego.
Zero ego.
If the honest answer is it can do 80% of what I do
for a living, then you are vulnerable.
You need to identify that the remaining 20%,
the tacit knowledge, the human intuition,
the relationships, and double down,
triple down on that.
That is your lifeline.
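One way to make that audit tangible is to put rough numbers on it. The sketch below is a hypothetical exercise of ours, the task names, hours, and automatable-fraction scores are invented examples, not data from Shumer or Roemmele; the point is the exercise, not the numbers.

```python
# Hypothetical "vulnerability audit" sketch: score each weekly task by how
# automatable it feels today (0 = fully human, 1 = fully automatable).
# Every entry below is a made-up example for illustration.

tasks = {
    # task: (hours_per_week, estimated automatable fraction)
    "drafting routine emails":   (5, 0.9),
    "writing boilerplate code":  (10, 0.8),
    "client relationship calls": (6, 0.2),
    "mentoring juniors":         (4, 0.1),
}

total_hours = sum(h for h, _ in tasks.values())
exposed_hours = sum(h * f for h, f in tasks.values())
print(f"Exposed share of the week: {exposed_hours / total_hours:.0%}")

# The "remaining 20%" to double down on: the least automatable tasks.
keep = sorted(tasks, key=lambda t: tasks[t][1])[:2]
print("Double down on:", keep)
```

With these example numbers, more than half the week is exposed, and the audit points you at the relationship and mentoring work as the tacit-knowledge lifeline to triple down on.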
That's a fantastic actionable step.
So let's try to synthesize all of this
for the listener who is maybe feeling that jolt
from Shumer's article right now.
We've thrown a lot of heavy, heavy concepts at them.
The core message is this.
Matt Shumer is right.
Something big is happening.
The signal is loud, and you must listen to it.
But Brian Roemmele is also right.
We have a map.
The 5,000 days isn't a doomsday clock.
It's a countdown to abundance if we navigate it correctly.
So the takeaway isn't panic, it's prepare.
Exactly, don't panic, prepare.
So immediate steps for you, the listener.
First, do that vulnerability audit
we just talked about.
Look at your skills, be honest.
Second.
Second, experimentation.
Do exactly as Shumer says, use the tools daily.
Develop an intuition for them.
Understand the "alien mind," as some people call it.
Don't just treat it like a better Google.
Talk to it, push it, see where it breaks,
see where it surprises you.
That's how you build literacy.
And third, and this feels really important.
Don't do it alone.
Crucial, absolutely crucial.
Find the allies that are mentioned in the hero's journey.
ReadMultiplex.com is a fantastic resource,
but also find your local peers, your friends,
your colleagues who are awake to this.
Community is the lifeboat that will get you
through the dark night of the soul.
You need people who speak this new language.
We've covered the signal, the map, the syllabus,
the myth, and the grief.
It's a heavy one today, but I have to say,
I feel clear, not less concerned, but more clear.
Clarity is the goal.
We can't stop the wave, but we can learn to surf.
I want to leave the listener with one final thought,
one question to mull over.
You mentioned Roemmele's endpoint.
The return with the elixir, the artisan's awakening,
this idea that we will eventually work only for the love
of it, for the craft.
The ultimate destination.
So here's the question for you, listening right now.
If the return with the elixir is real,
if you woke up tomorrow and never had to earn another paycheck
again, because AI and automation handled all the survival
stuff, what is the first thing you
would create?
Not what would sell, not what your boss would approve of,
not what would get likes on social media.
In the quiet of your own mind, what would you make?
What would you do?
That answer, whatever it is, that might just be your lifeboat
through the storm.
Thanks for diving in with us.
We'll see you in the transition.
Stay curious.
And keep reading.

ReadMultiplex.com Podcast.

