
* What are they doing? Using AI but afraid to say so.
The human adaptation to technological upheaval over the "5,000 days" horizon, spanning from late 2025 to an envisioned renaissance around 2039, serves as both a chronometer and a crucible. This period, which I have dubbed the Interregnum, encapsulates the turbulent transition from a labor-defined existence to one of liberated potential, where AI reshapes not just economies but the very fabric of identity and purpose. As we delve into Part 10, it is imperative to contextualize this moment within broader historical precedents: epochs like the Agricultural Revolution, which deskilled hunter-gatherer instincts while reskilling agrarian societies, or the Industrial Revolution, which mechanized craftsmanship yet birthed modern innovation.
Read more at: ReadMultiplex.com
Okay, let's unpack this.
Let's do it.
Because there is a massive, silent, and frankly, awkward elephant in the room right now.
And I don't mean a metaphorical elephant quietly standing in the corner of a boardroom.
I mean an elephant that is sitting right there in your home office, right there in your
cubicle.
And definitely right there in the coffee shop where everyone is huddled over their laptops.
It is the single biggest open secret of the modern workforce.
Everyone knows it's there, but nobody is making eye contact with it.
Exactly.
You're doing it, but no one wants to admit it.
And we need to describe the scene because I think every single listener is going to recognize
this moment.
Oh, absolutely.
You're at work.
You're writing an email, or maybe you're trying to debug some code, or you're drafting
a strategy document for the Q3 review.
And you have a specific tab open.
Maybe it's ChatGPT.
Maybe it's Claude.
Maybe it's Grok.
You're in the flow.
You type in a prompt.
You get an answer.
You copy.
You paste.
You tweak the language so it sounds a bit more like you.
And you hear footsteps.
That's the moment.
Someone walks by your desk, or your boss comes up behind you.
What do you do?
Snap.
Alt tab.
Minimize window.
Immediate reflex.
Alt tab reflex.
It is instantaneous.
It's like we are all teenagers in the 90s hiding video games from our parents, except
we are grown adults, professionals hiding the most powerful tool in the history of human
cognition.
We're treating it like contraband.
We're in this incredibly strange psychological state right now where we have access to the superpower
and we are treating it like it's something shameful.
It's the "don't ask, don't tell" phase of technological adoption.
It is a fascinating sociological phenomenon if you step back and look at it.
It really is.
We are seeing productivity gains across the economy that would have been unimaginable, literally
science fiction five years ago, yet the cultural narrative hasn't caught up.
Not at all.
There is this deep-seated fear that admitting to using AI is an admission of, well, cheating.
That's the word that keeps popping up, cheating or laziness.
Right.
But there is a darker fear underneath cheating.
The fear isn't just that you'll get a slap on the wrist.
No, it's much deeper.
The fear is that admitting you use AI is admitting that you, the human, are no longer necessary.
That is the core tension.
Yeah.
It's the nightmare scenario.
The, hey, boss, the AI wrote 80% of this report.
The fear is the boss says, great, the AI cost $20 a month and you cost a lot more than
that.
Why am I paying you?
Precisely.
And that fear is what we are going to dismantle today because if you remain in that state
of hiding, of shame, of looking over your shoulder, you remain a victim of the technology.
You're reactive.
You're letting the tool happen to you, but there is another way.
Welcome to the Read Multiplex podcast.
Today we are going back into the mind of the man who I think is seeing this clearer than
anyone else.
Brian Roemmele.
Yeah.
And I have to say, every time we dig into his work at Read Multiplex, I feel like my brain
expands about three sizes.
Brian really has no peer in this territory and I don't say that lightly.
You have plenty of analysts talking about stock prices or which chatbot has the best benchmarks
this week or what the latest rumor is from Silicon Valley.
But Brian is doing something completely different.
He's on a different level.
He is mapping the societal soul during this transition.
He's looking at the anthropology of the future.
He's not looking at the next quarter.
He's looking at the next generation.
And today we are unpacking part 10 of his massive 5,000 days series.
A huge undertaking.
Now, for anyone joining us who hasn't been following along with the Read Multiplex deep
dives, let's set the stage.
What exactly are the 5,000 days?
The 5,000 days is the specific timeline Brian has laid out, roughly spanning from late
2025 through to the late 2030s or early 2040s.
So about 14, 15 years.
Exactly.
He calls this period the interregnum.
Interregnum.
That is such a heavy evocative word.
It sounds like something out of Game of Thrones or the history of the Roman Empire.
It is a historical term and Brian uses it very deliberately.
An interregnum refers to a pause or gap between two reigns.
Between two kings.
The old king is dead.
The new king hasn't been crowned yet and the laws are suspended.
It's a time of chaos, but also a time of malleability.
So in our case, who are the two kings?
What are the two reigns?
Brian argues we are leaving the reign of labor as identity, where who you are is defined
by your 9 to 5 job and we are moving toward the reign of liberated potential.
But we aren't there yet.
We are in the messy chaotic middle.
We are in the gap between the trapeze bars.
You've let go of the one behind you, the old economy, but you haven't quite caught
the one in front of you.
And that is a terrifying place to be.
You're in mid-air.
Exactly.
And part 10 of this series is specifically designed to help us navigate that terror of
being in mid-air.
Okay, so what's the mission for today's deep dive?
The mission for today is to flip the script on that secret AI user stigma we started with.
We need to move from being victims of automation to becoming what Brian calls conductors of
a new symphony.
We love that image, the conductor, because a conductor doesn't play every instrument,
but the music doesn't happen without them.
That's the goal.
Brian introduces the concept of marshalling AI.
It's not about checking a box or just using a tool.
It's about existential amplification.
Existential amplification, wow.
But before we get to the solution, we have to understand the journey we've been on, because
Brian frames this entire 14-year transition using Joseph Campbell's hero's journey.
Which is such a brilliant way to look at economics.
It acknowledges that this isn't just a market shift.
It's a mythology.
It's an emotional narrative that we are all living through.
Absolutely.
So let's do a quick, previously on for the listeners.
Walk us through the hero's journey that has led us to part 10.
So it begins with the ordinary world.
That was part one of Brian's series, our old normal.
That's the world we are just starting to leave behind: the evolutionary psychology of
work as status.
We are wired to work because for thousands of years, if you didn't work, you didn't
eat and you didn't have status in the tribe.
Right.
Labor was synonymous with survival and status.
Labor was identity.
Right.
And then comes the call to adventure, which in this story is the arrival of this high-level
generative AI.
But what does the hero always do first?
They refuse the call.
Every time.
That was parts two and three.
The refusal and the grief.
We saw denial.
Anger.
Brian referenced Elisabeth Kübler-Ross's stages of grief.
And Kurt Vonnegut's novel, Player Piano.
Exactly.
It's that visceral reaction of this machine is going to replace me and I hate it.
I think a lot of people are still lingering in that phase of rage against the machine
phase.
They are.
And that's natural.
It's a valid stage.
But the journey pushes forward.
We moved into the reframing phase in parts four through six.
Okay.
Reframing.
This was about using cognitive techniques like those from Scott Adams to reprogram your
brain to see the threat as an opportunity.
This is where it got dark for a minute, though.
It did.
We faced the dark night of the soul: confronting the reality of zero-human companies, companies
that run entirely on code.
That was the scary part, the belly of the whale.
But then we hit a turning point in parts seven through nine.
The reward and the road back.
This is where we start to see the light.
We looked at Iain M. Banks' Culture series, envisioning a post-scarcity world.
What could life be like without mandatory labor?
And part nine had that fantastic twist, the artisan's awakening.
Oh, that was such a great insight.
Oh, right.
That was the realization that blue-collar work, plumbers, electricians, carpenters, is
actually safer from AI than white-collar work.
Because of Moravec's paradox.
Explain that one more time.
It's the paradox that things humans find hard, like abstract reasoning or chess, are easy
for computers.
But things we find easy, like physical dexterity or navigating a cluttered room, are incredibly
hard for computers.
So a robot is great at passing the bar exam, but it's terrible at fixing a leaky sink in
a tight cupboard.
Exactly.
So the hierarchy of work flipped on its head.
The lawyer became more vulnerable than the plumber.
So now here we are at part 10.
We are on the road back.
We've been through the underworld.
We've seen the potential reward.
We know AI is here.
We know the economy is changing.
We know we need to adapt.
But we are stuck.
We are stuck in a psychological trap.
Brian calls it the pressure paradox.
The pressure paradox.
Okay.
That's such a profound insight from Brian.
Let's unpack this paradox.
Because it feels counterintuitive.
If we know we need to use AI to keep up, and if the market is putting pressure on us to
be more productive, shouldn't that pressure make us better at it?
Shouldn't it force us to master the tools?
You'd think so. Logically, it makes sense.
Brian argues that when the pressure to use AI comes from the outside, from your boss,
from the market, from society, screaming adapt or die, it actually hinders true mastery.
It backfires.
It creates resistance.
Because nobody likes being told what to do, it feels like a mandate.
It goes deeper than just teenage rebellion.
Brian invokes the philosopher Martin Heidegger here, specifically his concept of technology
as Gestell, or Enframing.
Okay.
Let's take a beat here.
Gestell.
Enframing.
That sounds like a heavy German pastry, but I know it's deeper than that.
Break it down for us.
So Heidegger warned that technology isn't just a tool.
It's a way of revealing the world, a lens. And when technology enframes us, it frames
the world in a way that turns everything into a resource to be mined, a standing reserve.
Give us an analogy.
Think of a forest.
If you look at a forest with a poetic mindset, you see an ecosystem, beauty, mystery, you
see the forest as a forest.
But if you look at a forest through the lens of industrial technology, through Gestell,
you don't see a forest.
You see lumber.
You see standing reserve waiting to be processed into paper or two by fours.
Okay.
I follow.
You stop seeing the thing itself and start seeing its utility, its resourceness.
Exactly.
And now apply that to yourself and AI.
When you are forced to use AI by a corporation to hit a quota, you aren't engaging with
it as a creative partner.
You're just trying to get the job done.
You are enframing yourself.
You become the standing reserve, your intelligence, your creativity.
It's just a resource to be optimized by the algorithm.
You become a cog in the machine trying to keep up.
That makes total sense.
If my boss says, use ChatGPT to write five blog posts by noon, I'm not exploring the
tool's potential.
I'm not playing.
I'm not discovering.
I'm just trying to get the task off my plate so I don't get fired.
I'm treating the AI in myself as just a means to an end.
Precisely.
You become inauthentic.
And this is where Brian connects Heidegger to the psychologist Carl Rogers.
Okay.
Rogers talked about the conflict between self-concept and organismic experience.
Translate that for the non-psychologist among us.
Your self-concept is the story you tell yourself.
I am a skilled writer.
I spent years honing my craft.
I am thoughtful and articulate.
It's your professional identity.
Okay.
Got it.
But your organismic experience, your reality in the moment at your desk is, this AI just
wrote a generic but passable version of my article in four seconds.
Ouch.
Yeah, that creates some serious friction.
A real gut punch to the ego.
It's cognitive dissonance.
It hurts.
It's an attack on your very sense of self.
And the result is a defensive mechanism.
You put up a wall.
You do.
We either reject the tool entirely.
That's the Luddite response.
This thing is garbage.
It has no soul.
I'm a real writer.
Or we use it superficially and secretly.
That's the cheating response we talked about.
We use it to hide our perceived inadequacy rather than using it to expand our capacity.
So how do we break out of that?
Because we can't just quit our jobs.
We have to use these tools.
How do we move from cheating to, well, winning?
The key word for part ten is volitional integration.
Volitional.
Meaning for my own will, I have to choose it.
You must choose the tool.
The tool cannot be forced upon you.
This is where Brian brings in Jean-Paul Sartre's existentialism.
Wow.
We're hitting all the big thinkers today.
Brian pulls from everywhere.
Sartre said we are condemned to be free.
We must choose our own essence.
If you let your boss or the market force AI on you, you are acting like what gamers would
call an NPC.
A non-player character.
You're just following a script.
You're an object in someone else's story.
But if I decide on my own terms, I am going to master this thing because I am curious
about what I can build with it, then I'm the protagonist again.
Exactly.
It shifts from, I have to use this to, I get to use this.
And that shift in your internal posture changes everything about how your brain engages
with the technology.
It becomes play.
It becomes discovery.
It becomes authentic.
You know what I love about Read Multiplex?
It's that Brian doesn't just leave us with a philosophy.
He grounds it in history.
He's a master of historical analogy.
It's comforting to know that this feeling of technological angst isn't new.
He shows us that we've panicked about this before many, many times.
He is brilliant at this.
He outlines four specific instances where we freaked out about a new tool, only to realize
later that it didn't replace us.
It elevated us.
Let's go through these because I think they really help ground the fear.
The first one is the calculator in the 1970s.
Oh, the panic was real.
We forget this now, but when the pocket calculator arrived, educators were terrified.
They thought it was the end of civilization.
They thought it would rot children's brains.
It was a modern version of Plato's fear of writing.
Plato thought if we wrote things down, we'd lose our memory.
Teachers thought if we used calculators, we'd lose our ability to do math.
And to be fair, I can't do long division in my head to save my life.
Yeah.
So were they right?
Did it rot our brains?
In a narrow sense, yes, we did de-skill in mental arithmetic.
Most of us can't do complex calculations without a machine.
But Brian points out that we re-skilled in something far more valuable, modeling and
complex problem solving.
Right.
We stopped spending three hours doing the division and started spending that time figuring
out what to divide to build a bridge that doesn't collapse.
Exactly.
You offloaded the low-level cognitive task to the machine, which freed up your brain
for higher-level strategic thinking.
So what was the lesson from that transition?
The lesson is those who refused the calculator, the engineers who insisted on using slide rules
because it was pure or the real way, became obsolete.
They couldn't keep up.
They couldn't keep up with the speed of calculation required for modern engineering.
Those who mastered the calculator became the architects of the modern world.
Okay.
Analogy number two.
The spreadsheet.
1980s.
VisiCalc.
This one is fascinating for the financial world.
Before VisiCalc and Lotus 1-2-3, banking was done on paper ledgers.
It was slow, meticulous, artisan work.
You had to be incredibly careful.
One wrong number and the whole thing was off.
Exactly.
And the Wall Street bankers feared the spreadsheet.
They called it a black box.
They were afraid of errors they couldn't see.
And there was another fear too, wasn't there?
Something more intuitive.
Yes.
They felt they were losing financial intuition, the Greek concept of phronesis, or practical
wisdom.
I can imagine the old school bankers saying, you don't feel the numbers if you don't
write them.
That was precisely the argument.
The physical act of writing connected them to the flow of capital.
And yet the spreadsheet created the entire fintech industry.
It allowed for what if scenarios, what if interest rates go up?
What if the market crashes?
You could run a thousand models in an afternoon.
It was a superpower.
But Brian does add a warning here, right?
It wasn't all sunshine and rainbows.
No.
And this is a crucial point for our current moment with AI.
He points out that the 2008 financial crash happened in part because people trusted the
black box without understanding it.
They were using the tool, but not marshaling it.
Exactly.
They let the tool lead them.
They ran complex models on mortgage-backed securities without having the phronesis to ask:
wait, does this actually make sense in the real world?
We need the human in the loop.
The most effective bankers were those who combined their human intuition with the spreadsheet's
power.
Nietzsche's will to power, augmented by Excel.
That's the hybrid model.
Got it.
Analogy number three.
The dictaphone versus stenography.
This goes back to the early 1900s.
Stenographers, people who wrote shorthand were artists.
It was a highly skilled profession.
You had to train for years.
A real craft.
Then comes the dictaphone.
You speak.
It records on a wax cylinder.
And the fear was the dehumanization of language.
The loss of the art of shorthand.
That sounds very familiar to the complaints about AI writing today.
It has no soul.
Exactly.
But the dictaphone exploded productivity.
And Brian makes the direct comparison to otter.ai or AI transcription today.
So what happened to the stenographers?
Did they all lose their jobs?
The role evolved.
We aren't listening less.
We are synthesizing more.
The stenographer didn't disappear.
They evolved into the executive assistant, the editor, the strategist who could take the
raw transcript and turn it into actionable intelligence, again, moving up the value chain.
Okay.
And the final analogy, which feels the most direct spell check.
The fear that we would forget how to spell and lose our vocabulary.
Which, again, kind of happened.
My spelling is atrocious without red squiggly lines.
Mine too.
But did literature die?
Did content disappear?
No, it exploded: blogging, social media, self-publishing. Everyone became a writer.
It allowed for a content explosion because it lowered the barrier to entry.
And this is where Brian delivers such a powerful line.
He argues that generative AI LLMs, like ChatGPT, are just spell check for ideas.
That is a sticky phrase, spell check for ideas.
It is.
It's not generating the core idea for you.
You still have to have the insight, the vision.
But it helps you check your structure, your logic, your clarity.
It helps you articulate the idea better.
It doesn't provide the soul.
It polishes the vessel.
Beautifully put.
So we've looked at the history.
We know that resisting the tool leads to obsolescence.
We know that forced adoption leads to resentment and inauthenticity.
So how do we actively take control?
This brings us to the core solution of Part 10: Marshaling AI.
Marshal.
It's such a deliberate word choice by Brian.
Think about what a marshal does, a field marshal in an army.
They command.
They arrange the troops.
They have a strategy.
Exactly.
It implies order, command, arrangement.
A marshal isn't a passenger.
They are an authority figure.
It is active, not passive.
It's the antidote to that secret AI user stigma.
You aren't hiding the tool.
You are orchestrating it.
You're proud of it.
Brian leans on Marshall McLuhan here, the famous media theorist.
McLuhan famously said that all technologies are extensions of man.
The wheel is an extension of the foot.
Glasses are an extension of the eye.
A hammer is an extension of the fist.
Brian views AI as an extension of the central nervous system.
That's a big leap.
The wheel extending the foot makes sense.
It helps me move.
But an extension of the nervous system, that sounds almost biological, a cyborg concept.
Think about what your nervous system does.
It processes signals from the world.
It senses patterns.
It coordinates responses.
It learns.
Okay.
Brian argues that AI magnifies your existing cognitive strengths.
This is crucial.
AI is not a replacement for you.
It is an amplifier of you.
Can you give me a concrete example?
How does it amplify me?
Sure.
Let's say you are naturally a very empathetic person.
You're a teacher or a therapist.
You might fear AI because it seems cold and calculating.
A common fear.
But if you marshal it, you can use AI to scale your empathy.
How does a machine scale empathy?
That feels like a contradiction.
Imagine you have a class of 30 students. You know from your human empathy that Johnny
needs visual metaphors to understand history, Sarah needs strict logical bullet points,
and Michael learns best through stories.
Right.
But I only have so many hours in the day.
I can't write 30 different lesson plans.
But the AI has infinite time.
So you, the marshal, the conductor, tell the AI.
Take this core lesson on the American Revolution.
Now create a version for Johnny that uses metaphors about sports teams.
Create a version for Sarah that is a logical outline.
And create a narrative version for Michael.
Wow.
The AI handles the data processing, the grunt work of writing.
You provide the empathetic direction.
You've used the machine to extend your human care to more people more effectively.
And if you're on the other end of the spectrum, if you're highly analytical, then it scales
your data processing.
You can feed it a million rows of sales data and say, find the three most counterintuitive
correlations.
You become a super analyst.
Your unique human ability to ask good questions is amplified.
So the practical skill here, the thing we actually need to learn to be a marshal is prompt
engineering.
Right.
But I feel like that term has gotten a bad rap lately.
It sounds like learning cheat codes or just trying to trick the bot.
Brian says it's more than that.
He argues that prompt engineering is a core competency, not a fad.
And he reframes it.
He says it's about metacognition.
Metacognition thinking about how you think.
Whoa.
Yeah.
That sounds like inception, but it's true.
Explain that.
How is writing a prompt thinking about thinking?
Most people use AI lazily.
They type in write an email to my team about the new project.
That's low metacognition.
That's the victim mindset.
You're asking the AI to do the thinking for you.
Exactly.
You're abdicating responsibility.
And the marshal.
The conductor.
The marshal has already thought about the strategy.
Their prompt sounds like this.
Act as a project lead.
Draft an email to a team of engineers who are feeling burnt out.
The tone should be inspiring, but also acknowledge their hard work.
The goal is to get them excited about Project Phoenix without minimizing the challenges.
Reference our past success on Project Eagle.
Generate three variations.
One concise, one detailed, and one more informal.
I see the difference.
To write that prompt, I have to know what the strategy is in the first place.
I have to understand my team's emotional state, our history, the project goals.
Exactly.
You have to be the expert to guide the AI.
It pushes you to be clearer, smarter, and more intentional.
Paradoxically, to use AI well, you have to be more human, not less.
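The difference between the lazy prompt and the marshal's prompt can be sketched in code, as a rough illustration, not anything from Brian's essay: forcing yourself to fill in explicit strategic slots (role, audience, tone, goal, context, variations) is the metacognition made visible. The helper function and all its parameter names are hypothetical.

```python
# Sketch: a "marshaled" prompt makes every strategic choice explicit
# before the AI ever sees it. All names here are illustrative.

def marshal_prompt(role, audience, tone, goal, context, variations):
    """Compose a high-metacognition prompt from explicit strategy slots."""
    header = f"Act as a {role}."
    body = (
        f"Draft a message to {audience}. "
        f"The tone should be {tone}. "
        f"The goal is to {goal}. "
        f"{context}"
    )
    variants = "Generate {} variations: {}.".format(
        len(variations), ", ".join(variations)
    )
    return "\n".join([header, body, variants])

# The example from the conversation above, expressed as filled slots:
prompt = marshal_prompt(
    role="project lead",
    audience="a team of engineers who are feeling burnt out",
    tone="inspiring, but acknowledging their hard work",
    goal=("get them excited about Project Phoenix "
          "without minimizing the challenges"),
    context="Reference our past success on Project Eagle.",
    variations=["one concise", "one detailed", "one more informal"],
)
print(prompt)
```

The low-metacognition prompt, "write an email to my team about the new project," is what you get when every one of those slots is left blank.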
And it's not just text anymore.
This is evolving so fast.
We're talking about multimodal integration.
This is where the conductor metaphor really sings.
We have text.
We have images with tools like Midjourney.
We have code.
Voice.
And a whole orchestra.
A whole digital orchestra.
And the AI symphonist is a new archetype Brian talks about.
It's someone who can weave all these streams together to create something holistic.
Brian references Kant's aesthetics here, which is not what I expected in an essay about
AI.
He does.
The idea of blending human creativity with algorithmic serendipity.
Serendipity.
That's a beautiful word for software.
I usually use other words.
Well, think about it.
It's when the AI gives you something weird, something you didn't expect, maybe a phrase
you wouldn't have chosen, or an image that's slightly off-kilter, a happy accident.
Right.
It makes a mistake, but it's an interesting mistake.
The amateur rejects it because it's wrong.
The artist, the marshal, uses it.
They jam with it.
It creates something neither the human nor the machine could have created alone.
I love that.
It's like jazz.
You're riffing with the machine.
That's the highest level of marshaling.
It's co-creation.
So if we embrace this, if we become marshals, what does the future of work actually look
like?
Because let's be honest, the job as we know it is dying, isn't it?
The job is evolving into the portfolio career.
Brian predicts a shift from labor to leverage.
Labor to leverage.
That's the bumper sticker for the next decade.
We have data from McKinsey suggesting that centaurs, humans plus AI, are 20 to 30 percent
more productive than humans alone.
The centaur, half human, half machine.
But it's not just doing the same old job faster.
It's about entirely new vocations, new ways of creating value.
Let's speculate on some of these new roles based on Brian's analysis.
What are the jobs of the interregnum?
One major one is the AI curator.
Curator, like in a museum.
Similar concept.
As AI generates infinite content, infinite art, infinite text, infinite code, the cost of
creation drops to zero.
Right.
Supply becomes infinite.
Creation is free.
What becomes valuable?
Discernment.
Knowing what's good.
Curation.
The ability to look at 100 AI-generated options and say that one.
That's the one that resonates.
That's the one that tells the truth.
That's the one with taste.
So taste becomes a marketable skill.
The ultimate human skill.
Taste is the ultimate skill in a post-scarcity creation economy.
If you have good taste, you can wield armies of AI to execute your vision.
Then you have AI ethicists or auditors.
The watchdogs.
The referees.
As models hallucinate or show bias, we need the human in the loop to be the moral arbiter
to make sure the machines are fair.
Brian references John Rawls' veil of ignorance.
We need humans to check the code and ensure it treats everyone fairly to audit the black
box.
We can't just trust it blindly.
That's a crucial role.
What about something more personal?
The personal AI trainer.
Like a gym trainer, but for your bot.
In a way, it's about data sovereignty.
This is a big theme for Brian.
Right now we feed our data to big companies like Google and OpenAI.
We're making their models smarter with our lives.
Exactly.
But in the future, you will want to train models on your own proprietary data, your emails,
your journals, your body of work, your private thoughts.
You need someone who can help you build a digital U that stays under your control.
So I can have an AI that thinks like me that can draft responses in my voice, but works
while I'm sleeping.
And that you own completely.
That's the goal.
And finally, the narrative architect.
That sounds grand.
What does the narrative architect do?
It is.
It's using AI to build worlds, not just paragraphs.
Because you can generate code, art, and text simultaneously, a single person can now build
a video game or a movie or an interactive novel.
The barrier to entry for world building has collapsed.
You don't need a whole studio anymore.
You can be a studio of one.
It's exciting, but it's also daunting.
We are moving from a world where we work for survival to eventually a world where we
work for meaning, but we are in the interregnum.
The messy middle.
And the interregnum requires hustle, requires adaptation.
And Brian is clear.
This isn't a utopia yet.
It's a transition.
It's going to be bumpy.
There will be winners and losers.
So as we wrap up, let's bring it back to the listener.
The person standing at their desk right now, maybe listening to this on headphones, ready
to alt-tab away from ChatGPT if their manager walks by.
What is the final provocation?
The provocation is stop hiding.
Say it louder for the people in the back.
Stop hiding.
Admit you are using these tools.
But, and this is critical, frame it as mastery.
Don't say, uh, yeah, I used AI because I was stuck.
Say, I used AI to explore 10 different strategic angles and synthesized the best one in half the time.
Change the narrative.
You aren't cheating.
You're conducting.
You aren't lazy.
You're leveraging.
Don't let your company dictate how you use AI, and don't let society shame you.
Because here is the brutal truth of the 5,000 days.
The compound interest of wisdom is real.
Explain that compound interest of wisdom.
Those who start marshaling AI now, figuring out the prompts, the workflows, the synthesis,
the co-creation, will be so far ahead in five years it won't even be funny.
Their skills will compound.
And those who resist.
They are the engineers clinging to their slide rules while the calculator users are building
skyscrapers.
They will fall off the curve.
It's not a threat.
It's just historical gravity.
It's what has happened with every major technological shift.
Brian leaves us with a beautiful thought to close out part 10.
He says: this is the interregnum, but it is not purgatory. It is your time to transition
from this moment to the moment ahead with grace.
That's the goal.
Grace.
Not panic, not denial, but a graceful, active participation in our own evolution.
We have to be the architects of destiny, not the victims of the algorithm.
Well said.
Remember, this is only part 10.
The journey continues.
There are still thousands of days left in the interregnum.
Plenty of time to learn and to start marshaling.
Thanks for diving deep with us.
Go open that tab, and don't close it when someone walks by. Show them what you're building.
We'll see you in the next deep dive.
Keep conducting.

ReadMultiplex.com Podcast.
