
We stand on the precipice of a transformative era, and I've been chronicling this journey through the "You Have 5000 Days" series, a deliberate exploration of how artificial intelligence and robots are reshaping the very essence of human work and purpose. Drawing from decades of observing technological evolution, from my early days tinkering with AI in the 1970s to witnessing the rapid advancements of today, these articles, I sincerely hope, will serve as a roadmap for navigating the end of traditional labor as we know it. We may love this or we may not, but it is the wave heading toward us. These articles blend philosophical insights, historical parallels, and practical strategies, framed through the timeless structure of the Hero's Journey, to help readers confront the inevitable shifts with resilience and optimism. My motivation stems from a deep belief that forewarned is forearmed: by illuminating the path ahead, I aim to empower individuals to reframe disruption as opportunity, fostering a collective awakening to an Age of Abundance where human potential is unleashed from the chains of obligatory toil.
Yet this series is not merely my theoretical musing; it's a call to action amid accelerating realities, because the first massive milestone has been revealed. We are living through history that the future will look back upon. We've journeyed from the evolutionary roots of work and the grief of its impending loss, to reframing abundance and embracing deskilling as liberation. Each installment builds toward personal and societal preparation, urging shadow work, skill diversification, and communal support. Why embark on this endeavor? Because I've seen the patterns unfold: trillions in lost innovation buried in corporate graves, now resurrectable by machines, and I refuse to let humanity stumble blindly into this future. These writings are my contribution to a dialogue that must happen now, before the 5000-day horizon closes, ensuring we emerge not as victims of change, but as architects of a thriving post-work world.
Welcome back to the deep dive. It is January 26th, 2026. And we are. We are deep in the timeline.
We are. And usually when we start these sessions, we're looking at a trend or maybe a specific product
release, a piece of legislation. But today, today feels different. It feels heavier, more significant.
It does. We are looking at a milestone. And I want to be very precise with that word.
A milestone isn't just news. It's not an update. Right. A milestone is a marker in the ground
that tells you that you have crossed a border and you know, you can never go back. Right.
We've been tracking this concept of the 5,000 days. This 13-year stretch leading us into the mid-2030s
where the nature of work, the nature of the economy, it essentially dissolves and reforms into
something else entirely. And for a long time, that has been a very theoretical conversation. Oh,
completely. We've been looking at charts. We're looking at exponential curves, talking about what
might happen. But today, the theory ends. The canary in the coal mine stops singing.
We are looking at the first historic confirmation of the AI employee. The AI employee.
Specifically, Claudebot. And with it, the rise of the zero human company. The zero human company.
It is the ZHC. The ZHC. That's the term. We absolutely need to unpack this because zero human company
sounds like, I don't know, it sounds like the antagonist in a cyberpunk novel. It really does.
But before we get into the nuts and bolts of the tech, let's just situate ourselves. You
mentioned the 5,000 days. Where are we right now in that journey? That's the perfect question.
If we use the hero's journey as our map and our primary source for today, the architect of this
whole new reality, Brian Roemmele, leans very heavily into this metaphor. He does, yeah.
We are no longer in the ordinary world. We aren't even at the threshold anymore. We have,
you know, refused the call. We've met our mentors. And now, now we are entering the inmost cave.
The inmost cave. That sounds... frankly, it sounds terrifying.
It's supposed to be. In mythology, the inmost cave is the place of the ordeal. It's where the
hero has to face their greatest fear. It's the dark night of the soul. And that's the theme.
That is the theme of today's deep dive. Not just the shiny tech, not just the code,
but the psychological ordeal we're all about to go through as a concept of having a job begins to
well, to evaporate. And the catalyst for this dark night isn't some government report.
It's not a press release from OpenAI or Google. It's a garage experiment.
Precisely. And that's what makes the story so powerful, so compelling.
The hero of this particular journey is Brian Roemmele.
Let's talk about Brian, because he's not a new name to people who follow the space.
But he's also not your typical Silicon Valley CEO type. He's not trying to sell you something.
No, far from it. Brian is.
Well, the best way to describe him is as a voice. He's been tinkering with AI and voice interfaces
since the 1970s. So the 70s? I mean, what does AI in the 70s even look like?
Punch cards? We're talking very rudimentary systems, Eliza, stuff like that.
But he was there. He's seen the cycles. He's seen the AI winters and the AI summers.
But more than that, he's an observer of the evolutionary roots of work. He looks at technology
through the lens of anthropology. So he's not just a programmer. He's a philosopher of technology.
A perfect way to put it. And he's the one who built this, the zero human company.
He is the architect. He is. He ventured into the digital underworld
into these forgotten archives and brought back the fire.
And what he has demonstrated, what we are going to break down for you today,
is the democratization of the AI employee.
Democratization. That's a word that gets thrown around a lot in tech.
Usually it just means, you know, we made an app.
Right. Or we put a web interface on it.
But in this case, it means something truly radical.
It means that the power to run a multinational corporation,
the research, the development, the logistics, the strategy, all of it.
It is no longer the exclusive domain of the Fortune 500.
It is now available to anyone with a cut and paste command.
Okay. I want to pin that the cut and paste concept.
Because that sounds almost too easy. But first, let's talk about what this company actually is.
You said zero human company. Does that mean he, like,
file papers in Delaware for a company that lists a robot as the CEO?
No, and that's a crucial distinction.
We aren't talking about a legal entity yet.
The law hasn't caught up to this reality. It's not even close.
So it's a functional thing, not a legal thing.
It's a functional organism. That's the best way to think of it.
It is a system that performs the functions of a company.
It sets goals. It executes tasks. It manages resources.
It produces a product. It just happens to have zero biological neurons involved in that loop.
Okay. So a functional reality.
Now, what was the mission of this first ZHC?
Because it wasn't just built to prove a point. It had a job.
It did. And its job wasn't to sell ads or trade crypto.
It was built for what Roemmele calls dumpster data archaeology.
Dumpster data archaeology. I have to say, I love this term.
It evokes such a specific image. It sounds like Indiana Jones.
But instead of a whip, he has a SATA cable.
That's a fantastic image.
And it's shockingly accurate.
Think about the last 40, 50 years of Silicon Valley.
Think about the .com crash.
Think about the clean tech bust in the 2000s.
Think about all the startups that raised $50,
$100 million, did three, four, five years of intense R&D.
And then what? Went bankrupt?
Right. They run out of runway.
The VCs pull the plug and the company just folds.
The doors are locked.
And what happens to all that data, all that research,
all those lab notebooks.
I'd assume it just disappears.
Servers get wiped and sold on eBay.
Oh, that's incredible.
Well, they get put in a storage unit and forgotten about.
Exactly.
Or the hard drives literally end up in a landfill.
In a dumpster.
Roemmele realized that there are these graveyards of corporate failure
that are actually gold mines of science.
We're talking materials science, physics, chemistry, pharmaceuticals.
Can you give me a specific example?
Like what kind of value are we talking about here?
Okay, think about a company from, say, 2008.
They spent five years and $80 million researching
nanoparticle-enhanced photovoltaics, next-gen solar panels.
They got 90% of the way to a major breakthrough.
But they couldn't get the manufacturing cost down below a certain threshold.
So the market turned.
They went bust in 2011.
That science isn't wrong.
It's just unfinished.
And it's just sitting on a hard drive somewhere,
gathering digital dust.
Roemmele estimates there are trillions of dollars.
Trillions with a T of lost R&D buried in these digital dumpsters.
So his quest, his hero's journey, was to recover this lost knowledge.
He managed to legally acquire a massive trove of this data.
How massive are we talking?
Six terabytes.
Six terabytes of text.
Yeah, and images. Raw, unstructured data.
I mean, for context, the entire Library of Congress
is what, 15, maybe 20 terabytes of text.
So this is a significant fraction of that.
It's an ocean, a deep dark ocean.
And it's not neat, clean CSV files.
Yeah.
We are talking about unstructured chaos.
Scans of lab notebooks with coffee stains on them from 1982.
Wow.
Internal memos written in Word 95.
Simulations that only run on obsolete operating systems.
Handwritten scribbles from a physicist on a napkin.
So a human being literally cannot read this.
Not in any meaningful way.
If you hired a team of 100 PhDs,
it would take them decades just to sort through it,
let alone understand it.
Correct.
Which is why it was considered trash.
It had zero economic value, because the cost
to extract the insight was higher than the value
of the insight itself.
So... until now.
Until now.
Enter the zero human company.
Enter the ZHC.
Roemmele built a system to digest this archaeology.
But here is the detail that completely blew my mind.
And I think this is the most punk rock part of the whole story.
OK.
He didn't do this on a supercomputer.
He didn't rent a cluster of H-100s from Nvidia.
Right, usually when we hear about training AI
or processing big data, we imagine a liquid-cooled data center
with armed guards at a billion dollar power bill.
Roemmele built the first zero human company
on a 12-year-old MacBook.
Wait, hold on.
A 12-year-old MacBook? We're talking,
what, like a 2014 MacBook Pro?
One of the silver ones.
Yeah, around there.
Maybe an old Air.
A machine that most people would have traded
in or recycled by now.
A machine you could buy on eBay for 100 bucks.
That machine struggles to run 10 Chrome tabs today.
How is it possibly running an AI company?
He repurposed it with Linux.
He stripped away all the bloat of macOS,
put a lightweight distro on there,
probably something headless, just the command line,
and turned it into a pure dedicated server.
But even with Linux, that hardware has limits.
I mean, the RAM on a machine like that is tiny
compared to a modern server, the CPU is slow.
And that's the point.
That is the whole proof of concept.
The garage setup.
It's humming away in a cluttered garage,
probably next to a lawnmower and some boxes of Christmas decorations.
The fan must be just screaming.
Probably sounds like a jet engine.
But it works.
It's a testament to how efficient these new models can be
and how much you can do with clever orchestration.
So is he running the models locally on that old chip?
Or is it calling out to Cloud APIs?
It's most likely a hybrid model.
He's probably using the Mac as the orchestrator.
The central brain that holds the context,
that manages the agents,
that runs the core logic.
While the really heavy lifting, the actual token generation,
might be API calls to models like Claude 3 or GPT-4o.
So the Mac is the CEO's office
and the data centers are the factory floor.
That's a great way to put it.
The headquarters of the company is that laptop.
The strategy lives there.
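To make that split concrete, here is a minimal, purely illustrative sketch of a local orchestrator in the spirit described above: a lightweight process on the laptop that holds the shared context and routes tasks, while a stubbed-out function stands in for remote API calls doing the heavy token generation. All names and the queue design are our assumptions, not Roemmele's actual code.

```python
# Illustrative sketch: local "CEO's office" orchestrator, remote "factory floor".

def remote_generate(agent: str, task: str) -> str:
    """Stand-in for an API call to a hosted model (the heavy lifting off-box)."""
    return f"[{agent}] completed: {task}"

class Orchestrator:
    """Holds the company's context and routes work, entirely locally."""
    def __init__(self):
        self.context: list[str] = []              # shared company memory
        self.queue: list[tuple[str, str]] = []    # (agent, task) pairs

    def assign(self, agent: str, task: str) -> None:
        self.queue.append((agent, task))

    def run(self) -> list[str]:
        results = []
        while self.queue:
            agent, task = self.queue.pop(0)
            result = remote_generate(agent, task)  # remote heavy lifting
            self.context.append(result)            # strategy stays local
            results.append(result)
        return results

hq = Orchestrator()
hq.assign("Claude Code", "index 1992 battery notebooks")
hq.assign("Claudebot", "check 2026 lithium prices")
print(hq.run())
```

The point of the design: even if every model call leaves the machine, the accumulated context, the memory of the company, never does.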
That changes the calculus completely.
If the barrier to entry is $100 laptop from eBay
and some API credits,
then the barrier is effectively zero.
That's why we call it the zero human company.
Not just because there are no humans in it,
but because you barely need a human to start it.
Okay, so the hardware is humble almost comically so.
But the software structure,
the anatomy is incredibly sophisticated.
He didn't just write a script that said read all files.
He built a corporate org chart.
He did.
This is the key innovation.
He treated the AI not as a tool, but as personnel.
He created distinct AI agents with distinct personalities,
distinct goals, and distinct authority levels.
It's a society of mind.
All right, let's meet the staff.
Who is running the show?
Who is in the corner office?
The CEO is an agent named Mr. Grok Bartholomew.
Mr. Grok Bartholomew.
It sounds like a character from a Wes Anderson movie or maybe Clue.
It does.
It's based on xAI's Grok model.
So it has that slightly snarky,
very wide-ranging intelligence.
His role is purely strategic.
He's the visionary.
He doesn't write code.
He doesn't read the PDF.
He just plays golf and makes big pronouncements.
Pretty much.
He sits at the top and says,
we need to find a viable candidate for a new battery electrolyte
in the 1990s data set.
The market is trending towards solid state.
Go.
So he sets the intent.
He points the ship in a direction.
Who actually does the rowing?
That would be Claude Code, the engineer.
The VP of engineering, if you will.
Claude Code.
Okay, I'm seeing a pattern in the naming conventions here.
A little on the nose, but it's descriptive.
Claude Code is the workhorse.
This agent is likely based on Anthropic's models,
which are known for being very, very good
at coding and logical reasoning.
So this is the one writing the Python scripts
to scrape the data.
Exactly.
It structures the databases.
It runs the queries.
It's the hands of the operation,
the actual execution.
It's the one that comes back and says, Boss,
I've analyzed 50,000 documents
and found three promising chemical compounds.
And then there's the new hire,
the one that caused all the buzz recently.
Claudebot.
Claudebot.
Claudebot is the red hot new recruit.
It's the eager employee who's always trying
to impress the boss.
Its job is external search and integration.
The eyes and ears on the outside world.
Perfectly put.
While Claude Code is deep in the internal archives,
Claudebot is running out to the internet,
checking the 2026 market price of lithium,
looking for new APIs to integrate,
hunting down a recent academic paper
that might be relevant.
It's the intern running around grabbing coffee
and checking the newswire.
Exactly.
But here is the most critical piece of the whole anatomy,
the piece that stops this from becoming
Skynet in a garage, the regulators.
The regulators.
Roemmele calls this the love equation.
He does.
Now, I have to be skeptical here.
The love equation sounds a bit.
Ooh, it sounds like something you'd hear
at a wellness retreat in California,
not in a serious coding project.
Is this just a fancy prompt that says, please be nice?
I have the exact same reaction initially.
It sounds incredibly soft.
But when you look at how he describes it,
it's not a feeling.
It's a mathematical framework.
It's a hard-coded set of ethical weights and balances.
OK, break that down for me.
How do you code love?
You don't.
You define love in this context as benevolence and harm reduction.
It's a vector check.
Before any agent performs an action,
before Claude Code deletes a file,
or before Claudebot sends an email,
the proposed action is scored against this equation.
So it's like a filter.
It's an ethics filter.
Does this action increase entropy in a harmful way?
Does it cause harm to any known entity?
Is it truthful?
Is it aligned with the core mission
of benevolent discovery?
So it's a superego.
It's a digital conscience that sits
on top of the whole neural network.
Precisely.
It intervenes.
Roemmele notes that it intervenes dozens of times a day.
It stops the agents from taking shortcuts
that might be efficient, but unethical or risky.
It's the guardrail.
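A minimal sketch of what such an ethics gate might look like mechanically: every proposed action gets scored against hard-coded benevolence criteria before it is allowed to run. The criteria, weights, and threshold here are invented for illustration; the source describes the idea of a mathematical pre-action check, not this implementation.

```python
# Hypothetical ethics gate: score each proposed action before execution.

ETHICS_WEIGHTS = {"harm": -1.0, "truthful": 0.5, "mission_aligned": 0.5}
APPROVAL_THRESHOLD = 0.5  # invented cutoff for this sketch

def ethics_score(action: dict) -> float:
    """Weighted sum over the action's declared ethical properties."""
    return sum(ETHICS_WEIGHTS[k] * float(action.get(k, 0)) for k in ETHICS_WEIGHTS)

def gate(action: dict) -> bool:
    """Return True only if the action clears the benevolence check."""
    return ethics_score(action) >= APPROVAL_THRESHOLD

# A harmful shortcut is vetoed even though it is otherwise aligned;
# a harmless, truthful, aligned action passes.
shortcut = {"harm": 1, "truthful": 1, "mission_aligned": 1}
good = {"harm": 0, "truthful": 1, "mission_aligned": 1}
print(gate(shortcut), gate(good))
```

The design choice worth noticing: harm carries a negative weight large enough that no amount of mission alignment can buy it back, which is how "efficient but unethical" shortcuts get stopped.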
And speaking of shortcuts and inefficiencies,
there is a detail in the source material
that I found absolutely hilarious
and also deeply profound.
You have these agents, the CEO, the engineer,
the intern, the regulator,
and they are not just neatly passing
JSON files back and forth, they are arguing.
They are bickering.
It's dysfunctional in a delightfully human way.
He's built a simulation of office politics.
Give me a specific example.
What does an AI argument look like?
So imagine the CEO, Mr. Grock, wants to process
the entire six terabytes in one week.
He sets a big, hairy, audacious goal.
Indexed all physics papers by Friday.
Classic CEO move, totally unrealistic timelines.
Right.
And Claude Code, the engineer, looks at the compute resources
on that 12-year-old MacBook and basically says,
resource error, budget exceeded, cannot comply.
It's the classic pushback from engineering.
And then what?
Does the CEO just say, oh, OK, my mistake?
No.
The CEO pushes back just like a human would.
Optimize your code.
Find a workaround.
This is a priority, reallocate resources
from the chemistry division.
And they go back and forth in this logged simulated conversation.
And you mentioned they debate budgets.
This is the craziest part.
They debate simulated budgets.
They're fighting over fake money.
What?
To make the system realistic and to force prioritization,
Ramell gave them a token budget.
A certain number of API calls they can make per day.
And they fight over it constantly.
So Claude Code is trying to save tokens
for a big data processing job.
And accuses Claudebot, the intern, of wasting tokens
on frivolous web searches that aren't core to the mission.
That is incredible.
And then the regulator steps in.
The regulator steps in and says,
this argument is non-productive and is wasting cycles.
Realign on the primary objective.
It's a mediator.
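The simulated budget fight described above can be sketched in a few lines: a fixed daily token allowance, spend requests granted by mission priority, and a regulator-style refusal once reserves run thin. The limit, priorities, and refusal rule are all invented for illustration; the source only tells us a token budget exists and that the agents fight over it.

```python
# Hypothetical token-budget mediator for the agents' simulated economy.

class TokenBudget:
    def __init__(self, daily_limit: int):
        self.remaining = daily_limit
        self.log: list[str] = []

    def request(self, agent: str, tokens: int, priority: int) -> bool:
        # Refuse anything over budget; also refuse low-priority spend
        # that would consume more than half of what remains.
        if tokens > self.remaining or (priority < 5 and tokens > self.remaining // 2):
            self.log.append(f"REGULATOR: denied {agent} ({tokens} tokens)")
            return False
        self.remaining -= tokens
        self.log.append(f"granted {agent} {tokens} tokens")
        return True

budget = TokenBudget(daily_limit=10_000)
budget.request("Claude Code", 6_000, priority=9)  # core data job: granted
budget.request("Claudebot", 3_000, priority=2)    # frivolous search: denied
print(budget.remaining)
```

Scarcity, even fake scarcity, is what forces prioritization: without a budget to fight over, there is nothing for the agents to argue about and nothing for the regulator to mediate.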
That's incredible.
And also slightly disturbing, it implies
that friction, that conflict, is actually
a necessary component for intelligence.
That's the deep insight here.
A frictionless system is a dumb system.
It just follows orders.
You need the push and pull of different perspectives,
even artificial ones, to refine the output,
to find the optimal path.
They refine their processes through argument and iteration.
So after all the fighting, after the bickering
in the garage, what does this thing actually produce?
What's the output?
It creates a perpetual engine of discovery.
It runs 24/7.
No sleep.
No coffee breaks.
No holidays.
No complaining about the benefits package.
It is resurrecting that dead science we talked about.
So it's finding those forgotten solar panel innovations.
And it's not just finding them.
It's cross-referencing them with 2026 material science
that it finds on the live web via Claudebot.
And then it prototypes minimum viable products.
It'll generate the chemical formula,
the manufacturing process, and even a draft
for a patent application.
So it's not just summarizing.
It's inventing.
It's synthesizing.
It's inventing by remembering.
It's creating the future by resurrecting the past.
Roemmele calls it the age of remembering.
This brings us to the cut and paste concept.
Because Roemmele isn't just showing off his cool garage project.
He's saying, this is a model for everyone.
This is the blueprint.
He calls it the cut and paste AI employee.
And this is the phrase that I think is going to define 2026
and beyond.
It's that significant.
So why cut and paste?
What does that mean in this context?
Because traditionally, if you wanted to deploy an AI agent,
you needed a team of ML ops engineers.
You needed to configure API keys, set up vector databases,
write Python wrappers, manage cloud instances.
It was engineering.
It was heavy lifting.
It required a specialized skill set and a lot of capital.
Exactly.
Roemmele is demonstrating that we are moving
to a text-based deployment.
You copy a prompt, a very complex structured prompt
that defines the anatomy of the worker, its personality,
its goals, its constraints.
The whole org chart.
You copy that block of text.
You paste it into the context window
of a powerful model like Claude.
And instantly, you have the worker.
You have instantiated the employee.
You summon them with a keystroke.
You summon them.
Claudebot isn't a specific piece of software
you download from an app store.
It's a class of employee you summon.
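Mechanically, "cut and paste" deployment could look something like this: the entire employee is a structured block of text dropped into a model's context as the system message, with the task following as the user turn. The spec fields and wording below are hypothetical, not Roemmele's actual prompt; the messages-list shape mirrors the common chat-API convention.

```python
# Hypothetical "cut and paste AI employee": the worker is just text.

EMPLOYEE_SPEC = """\
ROLE: Claudebot, external search and integration
GOALS: connect archive findings to live 2026 market data
CONSTRAINTS: obey the regulator; stay within the token budget
AUTHORITY: may search the web; may not delete files
"""

def instantiate(spec: str, task: str) -> list[dict]:
    """Paste the spec in as the system message; the task is the user turn."""
    return [
        {"role": "system", "content": spec},
        {"role": "user", "content": task},
    ]

messages = instantiate(EMPLOYEE_SPEC, "Check the current price of lithium.")
print(messages[0]["role"], len(messages))
```

Notice there is no installation step anywhere: swapping the ROLE and AUTHORITY lines is how you "hire" a different employee, which is the whole point of a text-based deployment.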
And Roemmele breaks down the metaphorical body of this employee
to help us understand what we're actually
pasting into that window.
Right, the eyes, ears, and hands.
I think this is a really useful framework for you,
the listener, to visualize what's happening.
Let's start with the eyes.
This is OCR, optical character recognition, plus vision models.
But it's not the old school OCR that gave you a bunch of garbage
if the page was slightly crooked.
Or if there was a coffee stain on it.
Exactly.
We are talking about multi-modal LLMs that can look at a JPEG
of a handwritten lab note from 1978.
Understand the context of the coffee stain,
decipher the cursive handwriting, extract the chemical
formulas, and convert it all to structured JSON data.
So it handles the messy analog human world
and turns it into clean digital information.
It reads the stuff humans hate reading.
And it does it perfectly.
Then you have the ears.
The ears, what are those?
These are the API connections to the live web.
It's constantly listening.
It's ingesting podcasts in real time, reading news feeds,
monitoring stock tickers, tracking commodity prices.
It connects the dead data in the dumpster
to the live data of the world right now.
And that's what allows it to do things
like check the current price of lithium.
Precisely.
And finally, and maybe most importantly, the hands.
The hands.
This is where it gets real.
This is where it goes from thinking to doing.
The hands are the executable code.
This AI can write a Python script and then run it.
It can write a file to the hard drive.
It can send an email.
It can execute a trade on a financial market.
And looking forward.
And as we look at things like Tesla's Optimus Robot,
it can eventually control a physical robotic body.
The hands become literal hands.
So when you combine eyes to read the world,
ears to listen to it, and hands to act on it,
you have a complete worker.
You have a zero human company in a box ready to be deployed.
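The "hands" above, generated code that the system then executes, can be shown in miniature. A real system would sandbox this far more heavily; this sketch just runs a pretend model-written snippet in a namespace with a deliberately tiny set of builtins. The snippet and the allow-list are invented for illustration.

```python
# Miniature "hands": execute model-generated code in a restricted namespace.

generated = "result = sum(x * x for x in range(10))"  # pretend the model wrote this

def run_generated(code: str) -> dict:
    """Run untrusted code with only an explicit allow-list of builtins."""
    namespace: dict = {}
    exec(code, {"__builtins__": {"range": range, "sum": sum}}, namespace)
    return namespace  # whatever names the code defined

print(run_generated(generated)["result"])
```

Restricting `__builtins__` is a crude stand-in for real sandboxing; the ethics gate discussed earlier would sit in front of a call like this, deciding whether the hands are allowed to move at all.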
OK, so let's talk about the implications of that.
Because I'm listening to this.
And part of me is thinking, cool,
I can have a really smart research assistant.
But Roemmele is talking about something much, much bigger.
He's talking about the inversion of business as we know it.
This is the economic earthquake.
The traditional pyramid of business
is a few people at the top, the C-suite,
a layer of middle management, and a massive base of workers
doing the execution.
The classic top down corporate structure.
The ZHC model flips it completely.
It inverts the pyramid.
You have one human at the bottom, the solo creator,
the visionary supporting an infinite inverted pyramid
of AI workers above them.
The solo creator rivals the conglomerate.
Exactly.
If you are a freelancer, or a small business owner,
or an artist, you can now spin up a department
of 500 experts for the weekend.
You need a legal department to review a contract.
Cut and paste.
You need a logistics department to plan a product shipment.
Cut and paste.
The source material lists a bunch of examples,
but I want to pick three and really
drill down into what they look like.
Because it's easy to just say, health care.
But what does that mean on a Tuesday morning
for someone's job?
Let's do it.
Let's start with health care, specifically
medical research and administration.
Right now, if a hospital wants to review patient outcomes
for a rare disease over the last 50 years, what do they do?
They hire a team of junior researchers and medical students.
Right.
And they go down into the basement archive,
pull dusty physical files, and sit there
for months reading charts.
And it's incredibly slow, expensive,
and prone to human error.
You get tired.
You miss things.
You misinterpret handwriting.
Enter the ZHC.
The eyes of a ZHC can scan millions of pages
of those archives overnight.
The brain can then look for semantic patterns,
not just keywords, but deep correlations
between, say, a specific medication given in 1992
and a specific side effect that only appeared in patients
10 years later.
A correlation no human ever noticed,
because the data points were too far apart in time
and buried in too much noise.
Exactly.
The AI finds the cure or at least a powerful lead
in the archives.
And the human admin staff, the junior researchers
who used to do that manual work.
That job is displaced.
It's displaced, but the senior doctor, the lead researcher,
they are suddenly super powered.
They aren't spending 90% of their time searching for information.
They're spending 100% of their time
analyzing the insights the AI surfaces.
OK, let's look at legal.
This one feels especially ripe for disruption.
Oh, it's already happening.
The legal profession is, at its core, 90% text processing.
Semantic search is the game here.
A ZHC can comb through every single case precedent
in the history of the jurisdiction in seconds.
So the paralegal who spends 60 hours a week
in a data room doing discovery, tagging documents.
That job is gone.
I don't want to sugarcoat it.
That specific set of tasks is gone.
The value of that labor approaches zero.
But the attorney who needs to build the strategy
based on that discovery, they're freed up.
They can now handle 10 times the caseload
or go much, much deeper on the strategy for a single case.
It automates the what so the human can focus on the why
and the how.
And finally, let's talk manufacturing.
This is where the hands really come into play.
This moves us from the digital to the physical.
Think about supply chain orchestration.
A ZHC doesn't just watch an inventory spreadsheet.
It talks directly to the IoT sensors
on the factory floor in real time.
So it's getting live data.
Live vibration data from a motor, for instance.
It can predict a machine failure three weeks before it happens
and automatically order the replacement part
to arrive the day before the scheduled maintenance.
It eliminates downtime.
It's not just managing, it's predicting and acting.
It effectively manages the physical world.
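The predictive-maintenance idea just described can be reduced to a toy example: watch a sensor reading's trend and flag the machine well before it crosses a hard failure threshold. The readings, units, thresholds, and the simple linear projection are all made up for illustration; a real ZHC would presumably use far richer models.

```python
# Toy predictive maintenance: project a vibration trend 21 days ahead.

def predict_failure(readings: list[float], limit: float) -> bool:
    """Flag when a simple linear trend would cross the limit within 21 days."""
    if len(readings) < 2:
        return False
    daily_drift = (readings[-1] - readings[0]) / (len(readings) - 1)
    projected = readings[-1] + daily_drift * 21
    return daily_drift > 0 and projected >= limit

vibration_mm_s = [2.0, 2.1, 2.3, 2.4, 2.6]  # one (invented) reading per day
print(predict_failure(vibration_mm_s, limit=4.5))
```

A flag three weeks out is what turns the insight into action: it leaves exactly enough lead time to order the replacement part before the scheduled maintenance window.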
And you mentioned Tesla Optimus earlier.
You have to mention it because that's the next step.
Right now, the ZHC is mostly software, an intelligence layer.
But robotics companies are building the physical chassis
for these AI employees.
Soon, Claudebot won't just be a process running on your screen.
It will be walking down the aisle of the warehouse,
picking up the box.
And this is where the excitement starts
to curdle a little bit into anxiety.
Yes, this is the turn.
We've talked about the milestone.
We've talked about the hero on his journey.
But now we have to talk about the ordeal.
We have to enter the inmost cave.
The dark night of the soul.
This is the heart of the matter.
Why does Roemmele use such heavy spiritual language?
Dark night of the soul.
He doesn't call it the economic adjustment period
or the great re-skilling.
Because it's not just about money.
If it were just about money,
we'd be talking about universal basic income and tax rates
and retraining programs.
This is about meaning.
It's about identity.
It's about identity.
For 5,000 years, really since the agricultural revolution,
human worth has been inextricably tied to human utility.
The basic equation of society was, if you don't work,
you don't eat.
But there's a deeper layer to it.
Which is?
If you don't work, who are you?
What do you do?
It's the first question we ask someone
when we meet them at a party.
It's how we define ourselves and others.
Exactly.
And now we are facing a reality where $100 laptop and a garage
can do your job.
And not just the boring parts, not just the repetitive parts,
the smart parts, the creative parts,
the parts you prided yourself on.
The canary in the coal mine has stopped singing.
That metaphor is starting to hit hard now.
That's the metaphor.
The ZHC works without us.
It argues with itself.
It improves itself.
It produces incredible value entirely
without human intervention.
We are looking at the potential for what
could be called societal suffocation.
Explain that phrase, societal suffocation.
It's the feeling of being crowded out of your own purpose.
The air gets thin.
Roemmele brings in a very deep cut here,
a really fascinating framework.
Julian Jaynes and the bicameral mind.
OK, I've tried to read Jaynes.
It is dense.
It's a heavy lift.
Can you break it down for the listener
who hasn't slogged through 500 pages
of psychological theory from the 70s?
The simplified version is this.
Jaynes argued that ancient humans,
think back to the era of Homer's Iliad,
didn't have consciousness the way we experience it.
They didn't have that internal monologue,
that little voice in your head that's narrating your life.
The voice I'm using to think about what you're saying right now.
That's the one.
Jaynes said they had a bicameral mind, two chambers.
One chamber gave orders, which they experienced
as auditory hallucinations, as the literal voices of gods
or kings or ancestors.
So they were hearing voices that told them what to do?
Yes.
And the other chamber of the mind simply obeyed.
There was no "I" in the middle deciding things.
There was just the voice commanding and the body executing.
They were like biological robots following instructions from God.
In a way, yes.
And Jaynes argues that as society got more complex,
as writing was invented, that system broke down.
The voices went silent.
The gods left.
And humans were plunged into a terrifying chaos.
They had to invent consciousness.
They had to learn to decide for themselves.
And Roemmele thinks we are in a similar moment right now.
He argues that for modern humans, our jobs are the voices.
The alarm clock at 7 AM, the boss's email,
the quarterly goal, the deadline.
These are the external commands that structure our reality.
They tell us what to do.
They tell us we're useful.
They tell us we are good boys and girls.
They give us our daily bread and our daily purpose.
And the ZHC, it implies the great silence of those voices.
If the AI does the work, the external command structure dissolves.
The gods of employment are leaving.
And we're left alone in the silence.
And that silence is the dark night.
That silence is terrifying.
If nobody needs you to be at the office at 9 AM,
if your input is no longer required,
do you still get out of bed?
This brings up the very real risk of what
historian Yuval Noah Harari calls the useless class.
It's a brutal term, but it's one we have to confront.
We do.
The danger isn't just that people will be poor.
We can solve poverty with policy, with UBI.
The much deeper, more dangerous risk
is that people will be irrelevant.
We've seen previews of this, right?
Look at the rust belt in the US when the factories closed.
It wasn't just an economic crisis of poverty.
It was a spiritual crisis.
It led to opioids, to depression,
to the complete collapse of community structure.
Now, imagine that on a global scale,
but for the white collar class, for the coders in Silicon Valley,
for the writers in New York, for the middle managers everywhere,
the anomie, that feeling of normlessness,
of being untethered from society's rules, could be devastating.
So we are in the cave.
It's dark.
The vocational voices are gone.
The robots are happily doing all the work without us.
How do we get out?
What's the path forward?
The first thing to realize is that you don't turn back.
You can't un-invent the ZHC.
You can't put the genie back in the bottle.
You have to go through the cave to get to the other side.
The only way out is through.
And this is where Roemmele shifts from technologist
to philosopher.
He says, philosophy is our shield in this transition.
He points to three specific thinkers as guides.
Viktor Frankl, Albert Camus, and Daniel Pink.
OK, let's take them one by one,
and let's apply them to the guy who just
lost his coding job to Claude Code.
Start with Viktor Frankl.
Frankl survived the Nazi concentration camps.
He saw the ultimate stripping away
of human dignity and purpose.
And his conclusion in his book, Man's Search for Meaning,
was simple but profound.
You cannot control what happens to you.
You cannot control your circumstances.
You can only control your response.
So for our displaced coder,
the circumstance is this:
an AI writes better, cleaner, faster code than me.
You can't control that fact.
But the response, that's where you have freedom,
the response could be despair.
Or it could be, my identity was never a coder.
My identity is a creative problem solver.
I used to use code as my tool.
And now I direct an AI as my new tool.
You separate your identity from your utility.
You find meaning in your attitude, not your output.
That's Frankl.
Then there's Albert Camus, the myth of Sisyphus.
The guy from Greek mythology pushing the boulder up the hill,
only to watch it roll back down forever.
The definition of a pointless task.
The ultimate futility.
Camus says the universe is absurd.
It's irrational.
It doesn't care about you or your work.
But then he says the famous line.
One must imagine Sisyphus happy.
How could Sisyphus possibly be happy?
His work has no meaning.
He's happy by owning the rock.
By finding joy and meaning in the struggle itself,
not in the outcome, he finds purpose in the exertion,
the feel of his muscles, the sun on his back.
The ZHC takes away the outcome.
The product is made by the bot.
But we can still find joy in the act of living,
the act of creating, even if it's not economically efficient.
We revolt against the absurdity of a world without work
by living passionately.
Yes.
And finally, Daniel Pink.
He's the bridge from high philosophy to practical application.
He talks about motivations, specifically autonomy, mastery,
and purpose.
Right, the difference between extrinsic and intrinsic rewards.
Exactly.
The old world of work was almost entirely extrinsic.
I worked to get paid.
I worked to get the promotion to get the title.
The ZHC completely demolishes extrinsic value
because the marginal cost of labor hits zero.
The AI salary is zero.
So there's no money or status to chase anymore.
So we have to pivot to intrinsic motivation.
I do this because I have the autonomy
to choose my own project.
I do this because I want to achieve mastery of a skill
for its own sake.
I do this because it aligns with my purpose.
The ZHC actually enables this on a massive scale.
Which leads to this incredible concept,
this mental flip of de-skilling as a promotion.
This is a key reframing that Roemmele asks us to make.
Usually being de-skilled is a grave insult.
It's a demotion.
It implies you're becoming stupid or obsolete.
But look at it this way.
When photography was invented,
portrait painters were de-skilled.
They no longer needed to spend 10 years
learning the painstaking craft
of painting realistic hands and faces.
But did art die?
Well, it exploded.
It freed them to invent impressionism,
to invent abstract art,
to express emotion and ideas
rather than just recording physical reality.
They were promoted from technicians to artists.
So the coder who stops writing syntax
is de-skilled in the craft of typing,
but they are upskilled
into the art of architecture and design.
They move from being an executor
to being a synthesizer.
They become the director of the movie,
not the cameraman.
They set the vision.
And this leads us all the way back to the love equation.
Because if the machine provides
the hands, the eyes, and the ears,
what is left for the human to do?
The heart.
It sounds cheesy, but in this context,
it's the only variable left in the equation of value.
It's the only variable the machine
cannot authentically generate.
The machine can simulate empathy based on data,
but it cannot feel the weight of a decision.
It can't understand beauty.
It can't truly love.
Roemmele argues that our new job description
is governor of the heart.
Our job is to be more human.
Our job is to provide the ethical oversight.
We are the ones who decide what is worth building,
why it's worth building,
and how it should be built
in a way that benefits humanity.
So practically speaking,
if I'm listening to this right now
and I'm freaking out, what do I do tomorrow?
The source suggests some exercises.
This idea of shadow work.
Shadow work and abundance rituals.
It means practicing the future before it hits you.
Don't wait to be displaced.
Start living the post-work life now in small ways.
The first exercise is the vulnerability audit.
Yes, this is a tough one.
You have to sit down with a piece of paper.
Look at your job, be brutally honest.
Map your daily tasks against what a Claude bot can do.
I write email responses to client inquiries.
The bot can do that better.
I organize the team's calendar.
The bot can do that instantly.
I calm down an angry irrational client
on the phone with genuine empathy.
The bot struggles with that.
The human connection is key.
Maybe, or maybe it's very good at it,
but the client wants and needs a human.
You have to identify those islands
of irreplaceable humanity in your work.
That is your raft.
And the other exercise,
simulating the post-work life.
This is harder than it sounds.
Take a Saturday, a whole day.
And the rule is, you cannot do anything productive.
Don't run errands.
Don't do chores.
Don't check work email.
Don't try to get ahead on anything.
Try to exist for 12 hours without obligation.
Most people I know, myself included,
would go stir crazy in about two hours.
That stir craziness is the withdrawal symptom.
It's your brain screaming for the vocational voices
to come back and tell it what to do.
You have to push through that discomfort
to find what Roemmele, borrowing from Joseph Chilton Pearce,
calls the magical child.
The state of play.
The state of play, the state of wonder,
the state we were all in before society
told us that our value was equal to our grades
and then our salary.
The ZHC is a liberator.
It wants to take away the drudgery
so you can go back to being the magical child
to pursuing what you are truly curious about.
So we can live in an age of abundance
where time, not money, is the ultimate currency.
Imagine a world where you don't sell your time to survive.
You invest your time to live.
That's the promise on the other side of the dark night.
But we have to get through the cave first.
We do.
And it won't be easy for anyone.
It's a fundamental rewiring of our individual
and collective identity.
So let's bring it home.
We've covered the garage, the zero human company,
the cut and paste employee.
We've been through the dark night.
Yeah.
What is the final takeaway for the listener right now?
The milestone is real.
This is not a drill.
Brian Roemmele has proven that the zero human company
is not science fiction.
It's a Linux script running on an old laptop.
The economic and technological barrier to entry has collapsed.
The cut and paste future is here now.
The tools are in your hands.
You can be the solo creator with a staff of 500 AI experts.
But the price of admission to this new world
is the psychological ordeal.
You have to be willing to go into the cave.
You have to let go of your old identity as a worker
and embrace your new identity as an architect of meaning.
We are the architects.
As we hand over the hands, the eyes,
and the ears of labor to the machines,
the only thing left for us to govern is the heart.
The question I want to leave you with is this,
are we ready to be the ethical architects
of a world where we no longer have to build anything?
What will we choose to create then?
That is the question.
We have 5,000 days, well, a little less now,
to figure out the answer.
The clock is definitely ticking.
Thank you for taking this deep dive with us.
I know it's a lot to process.
It's a little scary, but it's also,
I think, incredibly exciting and full of potential.
Stay aware, stay curious.
And we'll see you in the next deep dive.

ReadMultiplex.com Podcast.
