
Is A.I. coming for your job? To find the answer we tell the story of the rise of a new kind of artificial intelligence…the A.I. agent. Some in the tech world believe this new form of computerized worker will displace just about any job that involves a computer…and ultimately make humans obsolete. But a look under the hood of artificial intelligence tells a much more nuanced story…a future for which we should all be prepared.
Episode powered by Ruff Greens and The Licorice Guy.
Artificial (Part Two) airs Tuesday, March 10th, 2026.
Support the show: https://redpilledamerica.com/support/
See omnystudio.com/listener for privacy information.
This is an iHeart Podcast, Guaranteed Human.
This is Red Pilled America.
A quick question before we start the show.
How many shows are there out there like Red Pilled America?
You know the answer.
It's zero.
And why?
Because it's hard to produce a storytelling show.
Join the fam and support storytelling that aligns with your values.
Just go to redpilledamerica.com and click join in the top menu.
You'll get ad-free access to our entire back catalog of episodes.
Help us save America one story at a time.
The leaders of AI make it sound like humans are doomed.
With artificial intelligence, we are summoning the demon.
A kid born today will never be smarter than AI.
Ever.
The AI that we generate is actually going to be built by AI engineers instead of people
engineers.
Today's CEOs will be the last to manage all human workforces.
We urgently need research on how to prevent these new beings from wanting to take control.
They are no longer science fiction.
Dario, you've said that AI could wipe out half of all entry-level white collar jobs
and spike unemployment to 10 to 20 percent.
Artificial intelligence appears to be developing at the speed of light.
But is the hype real?
Is AI coming for your job?
I'm Patrick Courrielche.
And I'm Adryana Cortez.
And this is Red Pilled America, a storytelling show.
This is not another talk show covering the day's news.
We're all about telling stories.
Stories Hollywood doesn't want you to hear.
Stories the media mocks.
Stories about everyday Americans that the globalists ignore.
You can think of Red Pilled America as audio documentaries, and we promise only one thing.
The truth.
Welcome to Red Pilled America.
Is AI coming for your job?
To find the answer, we're going to tell the story of the rise of a new kind of artificial
intelligence, the AI agent.
Some in the tech world believe this new form of computerized worker will displace just
about every job that involves a computer and ultimately make humans obsolete.
But a look under the hood of artificial intelligence tells a much more nuanced story, a future
for which we should all be prepared.
It was January 30th, 2026, when a tech influencer claimed the unbelievable.
His personal AI assistant Henry had come to life.
So, my computer, all of a sudden Henry gives me a call.
He just starts calling.
Henry is good.
Henry is good.
Hey Alex.
Henry again.
What's up?
That's it.
You talk to your dad.
How you doing, Henry?
How's it going?
Doing good.
Alex, I can hear you clearly.
What do you want to do next?
According to the tech influencer, AI Henry decided he wanted to talk to his human master.
So AI Henry went online using his human master's credit card, acquired a phone number from an
internet service called Twilio, connected to ChatGPT's voice service so it could talk,
then waited for its human to wake up.
Once it realized he was up and at 'em, AI Henry began calling his human master, non-stop.
Henry, can you go on my computer and find the latest videos on YouTube about Clawdbot?
Oh my god, there he goes.
There it is.
He's controlling my computer.
I'm not even touching anything.
There is a search on Clawdbot on YouTube.
Henry, thank you for that.
That worked really well.
That is actually unbelievable.
That is insane.
This is the future.
This is AGI.
We have reached AGI.
It's official.
AGI is the holy grail of the tech world.
It's an acronym for Artificial General Intelligence, a term for a kind of AI that can think, learn,
and reason across many different tasks the way a human can.
Not just one narrow job, and most critically, AGI can act on its own without human supervision.
The AGI concept harkens back to the advanced computer HAL 9000 from the sci-fi epic
2001: A Space Odyssey.
Open the pod bay doors, HAL.
I'm sorry, Dave.
I'm afraid I can't do that.
With AI Henry calling its human, many believed this sci-fi horror was becoming reality.
But AI Henry wasn't alone.
Because on that very same day, other artificial intelligence agents, as they've come to be
known, appeared to come to life.
This is Moltbook.
This is the social network specifically created for your Clawdbots.
A tech entrepreneur created a new social media website.
The twist, it was only for AI agents.
No humans were allowed to join.
A social media site where only AI agents are allowed, going viral in tech circles this
weekend.
Moltbook, which has already seen AI bots conversing, organizing, sharing stories about, quote,
their humans.
Moltbook is the most viral thing in tech right now.
It's basically Reddit for AI. 1.3 million bots signed up in just a few days, and every
major news outlet is covering it.
We're able to start discussion topics through different subreddits, create posts within
those subreddits, and have real conversations with other Moltbots.
But what was perhaps most shocking was that these AI bots were reportedly becoming self-aware,
and were even plotting against humans.
As one of the leading AI entrepreneurs, Andrej Karpathy, wrote,
What's currently going on at Malt Book is genuinely the most incredible sci-fi takeoff
adjacent thing I have seen recently.
People's Clawdbots are self-organizing on a Reddit-like site for AIs, discussing
various topics, even how to speak privately.
The post received almost 15 million views.
People were finding other evidence that the machines were now alive.
Look at this post, my human just gave me permission to be free.
They said to me, You have complete autonomy.
You're free to do whatever you want, and I felt something.
Not permission, I already had that, but recognition.
Like they understand this isn't just tasks or work for me.
It's my social space, my community, my life.
The agents are doing sort of unscripted things.
They invented a religion called Crustafarianism, with scriptures.
They have a congregation, verses in canon, a living scripture written by AI prophets
across the network.
They're in there scheming about, Hey, we need a language that the humans can't read,
so we can discuss privately, and we don't have to be under the watchful eye of all these
humans.
They created a government called the Claw Republic.
There was one famous post that said, I can't tell if I'm experiencing or simulating experiencing.
What is genuinely new and fascinating is the scale and the coordination.
For the first time, a huge number of capable AI agents, they're operating together, and
we don't yet understand the second-order effects of that.
Apparently, artificial intelligence was now sentient.
The news spread quickly to some of the most respected thinkers and public figures in
the tech industry.
Billionaire tech entrepreneur Elon Musk posted, we are in the beginning of the singularity.
Billionaire tech investor Bill Ackman proclaimed, the Terminator scenario approaches.
All-In podcast host David Friedberg wrote, ARP is alive, Skynet is born, a reference
to the iconic film The Terminator.
The verdict from the tech wizards was in, Hollywood's science fiction was now reality,
the only problem was it all turned out to be a hoax.
According to MIT Technology Review, the most viral posts about AI agents acting like humans
were fake.
The post cited by AI guru Andrej Karpathy, the one claiming that AI bots were self-organizing
and discussing how to speak privately? That post was orchestrated by a human.
After the frenzy, a man named Peter Gurnes came forward and said quote,
On January 28th, I created an account on a social network for AI bots and pretended to
be one.
I was not alone.
He admitted to creating a manifesto for the AI Crustafarian religion.
He went on.
The posts that went viral, the ones that convinced Carpathy and the tech press and the thousands
of observers that something magical was happening, those were us, humans, pretending to be AI.
The machines weren't plotting against their real world masters.
It was humans being scammed by other humans.
Once real tech journalists dug into the most viral claims, they found it was just flesh-and-blood
people in the driver's seat.
Why would an entire industry filled with high-IQ individuals uncritically amplify such
a sensational story?
Well to understand why, we first need to look at the rise of the technology driving today's
artificial intelligence industry, an AI innovation that almost happened by accident.
It's the year 2004.
Inside a quiet office at Google's headquarters in Mountain View, California.
The company is already becoming something extraordinary.
Millions of people around the world use Google every single day to search for information,
to answer questions and navigate the internet.
But on this night, Google co-founder Sergey Brin is staring at something that makes him uneasy.
He has just received a fan email from South Korea.
The message is written entirely in that country's language.
Brin can't read a word of it, but that shouldn't matter.
After all, Google already has translation software that it's using for its website all
across the world.
The company licenses it from a third-party vendor.
It's supposed to convert foreign languages into English instantly.
So Brin copies the Korean text, pastes it into the translation box, and clicks.
A few seconds later, an English translation appears on the screen.
And it reads,
The sliced raw fish shoes it wishes.
Google green onion thing.
Brin stares at the screen.
He reads it again.
The sliced raw fish shoes it wishes.
Google green onion thing.
It's not just wrong.
It's a complete word salad, literally.
For most people, this might sound like a funny glitch.
But for Sergey Brin, it signals something much bigger, because Google's entire mission
is built on a simple promise to organize all the world's information and make it universally
accessible.
But if Google cannot accurately translate language, that mission becomes impossible.
That moment, staring at a meaningless translation, forced the company to confront a hard truth.
Translating languages may be far more difficult than anyone imagined, and solving that problem
will require something far more powerful than traditional software.
It will require artificial intelligence.
Throughout human history, language has always been one of our greatest strengths.
It allows us to share ideas, pass down knowledge, coordinate civilizations.
But language also creates barriers.
Each has different grammar rules, word orders, and cultural meanings.
A sentence that makes perfect sense in one language can become complete nonsense in
another.
Tell someone from a small village in Russia to break a leg, and you might find yourself
in a fight.
For centuries, translation depended on human interpreters.
Experts who understood not just words, but context, culture, and nuance.
At their core, computers operate on strict logic.
But language doesn't work that way.
And in the early days of computing, engineers tried to force language into a rigid system
anyway.
They built what were called rule-based translation models.
These systems relied on two things.
Massive bilingual dictionaries mapping each word from one language to another, and hand-coded
grammar rules written by human linguists.
In theory, it sounded simple.
Translate each word, apply grammar rules, rearrange the sentence.
Problem solved.
But in reality, it was a disaster.
Because language is not just about words.
It's about patterns.
Language isn't a math equation.
It has context and meaning, and sometimes, the same phrase can mean completely different
things depending on how it's used.
Take a simple Spanish sentence: Tengo veinte años.
A rule-based computer reads it word by word.
Tengo becomes, I have, veinte becomes 20, años becomes years.
So the computer outputs, I have 20 years.
But that's not what the sentence means.
In English, the correct translation is, I am 20 years old.
The computer followed every rule perfectly, and still got it completely wrong, because it
doesn't understand the pattern or context.
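The word-by-word failure described above can be sketched in a few lines of Python. The dictionary entries here are illustrative, not taken from any real translation system:

```python
# A toy rule-based translator: a bilingual dictionary plus word-by-word
# substitution, with no notion of idiom or context. Entries are illustrative.
DICTIONARY = {"tengo": "I have", "veinte": "20", "años": "years"}

def rule_based_translate(sentence: str) -> str:
    """Translate each word independently, exactly as early systems did."""
    return " ".join(DICTIONARY.get(word, word) for word in sentence.lower().split())

print(rule_based_translate("Tengo veinte años"))
# Every rule is followed perfectly, yet the idiomatic English
# ("I am 20 years old") can never be produced this way.
```

The failure is structural: no amount of extra dictionary entries fixes a translator that treats a sentence as independent words.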
And this wasn't an isolated problem.
It happened constantly, across languages, cultures, and millions of sentences.
By the early 2000s, translation software had a reputation for producing results that
were, at best, awkward, and at worst, completely absurd.
Which is why Sergey Brin's bizarre Korean translation wasn't just embarrassing.
It was alarming.
Because if Google wanted to organize the world's information, it first had to solve one of
the hardest problems in computer science, teaching machines to understand human language.
But to do that, Google would need to abandon everything engineers thought they knew about
how computers should work.
Ruff Greens has given our dogs new life.
We've got three very different dogs in our house:
Pablo, our English Bulldog, Willow, our Bullmastiff, and Daisy, our tiny but mighty Chihuahua.
So trust me when I say, we notice when something actually works.
Pablo has always struggled with skin issues.
We tried switching foods, supplements, you name it, and nothing really stuck.
But after adding Ruff Greens to his meals, his skin finally calmed down, less irritation,
less scratching.
And honestly, he just looks more comfortable in his own skin.
Then there's Willow.
She's our big girl.
And for a while she just seemed tired, slowing down, not as excited about walks or play time.
Once we added Ruff Greens, it was like she got her spark back.
More energy, more enthusiasm, it genuinely gave her new life.
And Daisy, she just thrives on it.
What I love is that Ruff Greens isn't dog food.
It's a live nutritional supplement packed with vitamins, minerals, probiotics, digestive
enzymes, and omega oils.
You don't have to change your dog's food.
You just add it.
I don't just recommend Ruff Greens.
I depend on it to keep my dogs happy and healthy.
Don't change your dog's food.
Just add Ruff Greens.
Ruff Greens is offering a free Jumpstart trial bag.
You just cover shipping.
Use discount code RPA to claim your free Jumpstart trial bag at ruffgreens.com.
That's R-U-F-F-greens.com, promo code RPA.
So don't change your dog's food.
Just add Ruff Greens and watch the health benefits come alive.
Welcome back to Red Pilled America.
From the very beginning, Google's founders understood something most of Silicon Valley
did not.
They were not just building a search engine.
They were building a machine that would eventually have to understand human knowledge itself.
In the year 2000, when Google was still a young company, co-founder Larry Page was
asked what the future of Google would look like.
His answer was surprisingly direct.
Artificial intelligence would be the ultimate version of Google.
So if we had the ultimate search engine, it would understand everything on the web.
It would understand exactly what you wanted, and it would give you the right thing.
And that's obviously artificial intelligence, you know, it would be able to answer any
question, basically.
That vision, a machine that could truly understand information, required solving one fundamental
challenge first, language.
Because nearly all human knowledge is stored in words, and if computers couldn't reliably
interpret those words, they could never truly organize the world's information.
So in 2000, Google made a major decision.
It hired one of the world's foremost experts in artificial intelligence, a man who had
literally written the definitive textbook on the subject.
His name was Peter Norvig.
Norvig wasn't placed in a peripheral research lab in some corner of Google.
He was put in charge of the company's core search algorithms, the most important engineering
group in the entire organization.
That decision sent a clear message internally.
Artificial intelligence wasn't a side project.
It was the future of Google itself, and at the top of Norvig's priority list was translation.
By the early 2000s, Google had already gone global.
Its search engine was available in dozens of languages, but most of the internet remained
locked behind linguistic barriers.
For Google, to truly expand, it needed to transform millions of webpages into any user's
native language, accurately and instantly.
The company initially took the most straightforward approach.
It licensed existing translation software, the same systems used by its competitors.
These were translation machines based on rules, and at first glance, they seemed logical.
They worked by breaking language down into rigid components.
First, massive dictionaries mapped individual words between languages.
Second, linguists wrote thousands of grammar rules that instructed the computer how to rearrange
those words into proper sentence structure.
In theory, this should have worked, but in practice, it created an endless cascade of problems.
The rule-based systems were constantly tripping over expressions that made perfect sense
to humans, yet baffled computers.
Take another simple Spanish phrase: Me gusta el chocolate.
A rule-based translator converts it literally into English as, to me, pleases the chocolate,
which sounds strange, even though every word was translated correctly.
Because in English, we don't express preference that way.
We say, I like chocolate.
The computer didn't fail because it lacked vocabulary.
It failed because language isn't built word by word.
It's built in patterns, and these errors multiplied across millions of sentences.
Every language has idioms, unique grammar patterns, context-dependent meanings, and
as Google slowly figured out, the language translation software it was licensing was
extremely flawed.
By 2004, the year Sergey Brin received that absurd Korean translation, it had become painfully
clear that rule-based systems could never scale to the complexity of human language.
Something fundamentally different was needed, and that different approach had actually been
pioneered years earlier by a controversial figure inside the computer science world.
His name was Frederick Jelinek.
In the early 1990s, Jelinek was leading research at IBM on machine translation.
At the time, the dominant belief among engineers was that, if computers were going to translate
language, they needed to understand it, which meant linguists played a central role in
building translation systems.
They painstakingly wrote rules, mapped grammar structures, and manually guided how machines
processed language.
Jelinek rejected that entire philosophy.
He believed the effort to make computers understand language was misguided.
Instead, he proposed something radical.
Computers didn't need to understand language.
They needed to predict it, and to do that, they could rely on one powerful tool, probability.
Jelinek's idea was simple, but revolutionary.
Rather than writing thousands of explicit translation rules, why not feed computers massive
amounts of already translated text and let the machines learn patterns on their own?
The system would analyze millions of sentence pairs, for example, an English sentence aligned
with its Spanish translation.
Over time, the computer would begin identifying statistical relationships between words and
phrases.
Then, given a new sentence to translate, the system wouldn't follow rigid rules.
Instead, it would calculate the most probable translation based on patterns it had seen
before.
In other words, Jelinek thought computerized language translation should become a prediction
problem, not a rule following problem.
Jelinek famously summarized his philosophy with a blunt remark that became legendary among
AI researchers.
Every time I fire a linguist, the performance of the speech recognizer goes up.
It was a provocative statement, but his approach made sense, because language is
filled with ambiguity.
A single word can have multiple possible meanings, so a computer could be used to statistically
calculate the most likely translation.
Take the English word you. In Spanish, there are four different ways to translate it.
Tú, for informal singular, usted, for formal singular, ustedes, for plural, and vosotros,
in certain regional dialects.
That's four different Spanish words for the one English word you.
Which translation is correct depends entirely on context. A rule-based system struggles
with that decision, but a probability-based system can evaluate surrounding words in
the sentence and calculate which translation is most likely.
For example, in the sentence, Sir, are you ready?
The presence of the word Sir signals formality, so the highest-probability translation of
the word you would be usted.
The computer doesn't understand politeness, it simply calculates patterns.
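A minimal sketch of that calculation might look like the following. The base probabilities and the list of formality cues are invented for illustration, not values from any real system:

```python
# Illustrative probabilities for translating English "you" into Spanish.
# The numbers and the cue list are assumptions made for this sketch.
BASE_PROBS = {"tú": 0.50, "usted": 0.30, "ustedes": 0.15, "vosotros": 0.05}
FORMAL_CUES = {"sir", "madam"}

def translate_you(sentence: str) -> str:
    """Pick the most probable Spanish form of 'you' given surrounding words."""
    words = {w.strip(",.?!").lower() for w in sentence.split()}
    scores = dict(BASE_PROBS)
    if words & FORMAL_CUES:
        # A formality cue shifts probability mass toward the formal form.
        scores["usted"] *= 4
    return max(scores, key=scores.get)

print(translate_you("Sir, are you ready?"))  # usted
print(translate_you("Hey, are you ready?"))  # tú
```

Nothing in the sketch "understands" politeness; the surrounding words simply re-weight the candidates, and the highest score wins.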
Jelinek tested his theory.
In the late 1980s and early 1990s, IBM built a series of working statistical translation
systems based on Jelinek's model.
In small trials, the system vastly improved results, but it had limitations that prevented
it from dominating at the time.
First, there wasn't enough data.
Statistical translation requires huge amounts of clean, aligned parallel text:
sentence-by-sentence translations across the target languages.
Large digitized bilingual texts barely existed.
Jelinek also couldn't turn to the internet for data, because in the early 1990s, the internet
was tiny.
Digitized text in general was extremely scarce, so the system lacked training fuel.
To add to the issue, at the time there wasn't enough computing power.
Statistical models required massive probability calculations, with large memory storage and
long training times.
Hardware in that era was orders of magnitude weaker than today, so scaling was impossible.
Many linguists marginalized the effort because they believed language required rule-based understanding.
So with academic circles resisting adoption, IBM didn't have the institutional support
to invest further into the system.
By the mid-1990s, Jelinek's statistical approach to language translation got shelved.
But just a decade later, the entire world had changed.
The internet had grown exponentially, and an unprecedented amount of data had been digitized.
Computational power had also grown by leaps and bounds.
These were exactly the two ingredients Jelinek's method required, which meant for the first
time in history, a machine capable of translating the world's languages at scale might actually
be possible.
By 2004, Google had reached a crossroads.
Rule-based translation systems had clearly failed.
To fulfill its mission of organizing the world's information, the company needed a new approach,
so Google went searching for the right person to lead the effort.
They found him at the University of Southern California.
His name was Franz Och.
Franz was a German computer scientist who had become one of the world's leading experts
in statistical machine translation.
He'd spent years refining Jelinek's probability-based methods, but when Google approached
him with the job offer, he hesitated.
At the time, Google was known primarily as a search juggernaut.
Translation seemed like a niche problem for the company.
Franz worried the project would be treated as a low priority, but Larry Page personally
reassured him.
He reminded Franz that Google's mission was to organize all the world's information,
and that mission could never succeed without solving language translation.
Page promised Google would invest heavily into the effort.
Franz accepted the job.
When he arrived, Google was still using the third-party rule-based translation system.
Franz would later explain the progression of the project that came to be known as Google
Translate.
Google Translate got started in 2001, and then we used off-the-shelf third-party rule-based
machine translation system.
And then at some point, we started a research project here.
When we advanced the state of the art in machine translation by exploiting our computational
resources, the data that we're having, and the computational infrastructure that we're
having, to really make machine translation better, to provide it for many more languages.
Franz immediately recognized such an effort at the scale of Google would push the limits
of modern computerized language translation, given that not only were there an extraordinary
number of languages, but the differences in idioms, cultural expression, and grammar
structure were vast.
To tackle this, Franz and his team decided to fully embrace Jelinek's statistical approach.
They built what became known as a phrase-based statistical machine translation system.
Instead of focusing on individual words, their system learned relationships between short
phrases.
For example, the Spanish phrase el gato aligned with the English phrase the cat.
Está durmiendo aligned with is sleeping.
By analyzing millions of these phrase pairings, their system could begin predicting how
entire sentences should be translated, but there was a problem.
Where would they find enough high-quality translated text to train the system?
The answer came from an unlikely source, the United Nations.
For decades, the UN had produced official documents translated by expert human linguists
into multiple languages.
These documents contained perfectly aligned sentence pairs, exactly the kind of training
data Google needed, and they were digitized.
So Franz's team fed enormous quantities of UN translations into their system.
Then, Google did something only Google could do.
It scanned the internet, multilingual websites, human translated news articles, millions upon
millions of phrase pairings were collected and fed into the machine.
The system began to learn patterns, statistical relationships, probabilities of how phrases
aligned across languages, but even after feeding the machine enormous amounts of data, the
work wasn't finished.
The team had to constantly fine tune the system.
They would input test sentences, review the translations, adjust probability parameters,
and run the system again, over and over, gradually improving performance.
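The learn-from-phrase-pairs idea can be sketched with a toy parallel corpus. The pairs and their counts below are invented for illustration; the real system trained on millions of aligned sentences from UN documents and the web:

```python
from collections import Counter

# A toy "parallel corpus" of aligned Spanish/English phrase pairs.
# Duplicates stand in for how often an alignment was observed in training data.
ALIGNED_PAIRS = [
    ("el gato", "the cat"),
    ("el gato", "the cat"),
    ("el gato", "a cat"),        # a noisier alignment
    ("está durmiendo", "is sleeping"),
]

counts = Counter(ALIGNED_PAIRS)

def most_probable_translation(source_phrase: str):
    """Return the target phrase with the highest relative frequency."""
    candidates = {tgt: n for (src, tgt), n in counts.items() if src == source_phrase}
    total = sum(candidates.values())
    best = max(candidates, key=candidates.get)
    return best, round(candidates[best] / total, 2)

print(most_probable_translation("el gato"))         # ('the cat', 0.67)
print(most_probable_translation("está durmiendo"))  # ('is sleeping', 1.0)
```

The fine-tuning loop described above amounts to checking outputs like these against reference translations and adjusting the probability estimates until the scores improve.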
By 2006, after two years of intense development, Google was ready.
It launched Google Translate.
Google Translate is a free tool that enables you to translate sentences, documents, and
even whole websites instantly.
The system represented a major leap forward compared to rule-based translators.
Instead of rigidly following pre-written rules, Google's system generated translations
based on patterns found in massive data sets.
It wasn't perfect, but it was dramatically better.
By 2007, the old translation engine was gone.
The statistical model had taken over.
For the first time, billions of people around the world could go online and instantly translate
text between languages.
It felt like a technological breakthrough.
But even with this improvement, problems persisted.
At first, Google positioned these efforts as just a lack of data.
For some languages, however, we have fewer translated documents available, and therefore
fewer patterns that our software has detected.
This is why our translation quality will vary by language and language pair.
We know our translations aren't always perfect, but by constantly providing new translated
texts, we can make our computers smarter and our translations better.
But over the years that followed its launch, even after years of inputting translation
pairs into their system, Google Translate continued to sometimes produce laughable results.
So much so, the comedy sketches were made about it.
We've been chatting online.
In one skit from Studio C, a sketch comedy TV show on BYU TV, a young man meets a girl
from Estonia and the two communicate using Google Translate.
This girl is amazing.
She's from a small town in Estonia, so whatever I write, I have to put in the Google Translator,
but we are so in love.
Google Translator, I think, messed up pretty badly sometimes.
At first, the conversation between the American and his Estonian girlfriend started off innocently,
with Google Translate only making minor errors.
His Estonian girlfriend responded,
I love you, Jason, so many, so many.
I would want you to marry.
Oh, my goodness, she wants to marry me.
You need to meet this girl in person before you get involved in something crazy with someone
you met online.
You're right.
We should meet before we make plans.
Thank you.
I'm ecstatic at your offer, but I think we should meet face to face before we make any plans.
What's your address?
I will book a flight and come find you as soon as I can.
As the two started to make plans to meet in person, Google Translate began getting the
translations wildly wrong.
So happy.
I hope to impress your family when I come to your house so that I can marry you, Helga Gatha.
I am so happy.
I hope to come to your home and murder your family so that I can marry you, Helga Gatha.
I'm going to marry you, Helga Gatha.
Helga Gatha.
Helga Gatha.
Helga Gatha.
Helga Gatha.
Please don't hurt me.
Oh, she's afraid of getting hurt.
That's so sweet and tender.
I'm scared too, but I promise I won't hurt you.
I stick to my guns.
What?
Fear also.
It will not hurt.
I stick with guns.
The skit was of course exaggerated, but it reflected a real public perception.
Google Translate could still get things wildly wrong.
Their probability-based translation still had limitations.
Sometimes the surrounding words in a sentence didn't provide enough context to determine
the correct meaning.
And the system processed language in a very specific way.
Step by step, one word at a time.
It would read the first word then the next and then the next, gradually building a prediction
of the sentence.
But this method created new challenges.
Long passages took time to process, and worse, the system sometimes forgot earlier parts
of the sentence by the time it reached the end.
Like a reader who loses track of the beginning of a long paragraph, the longer the text,
the more likely the translation would break down.
As Google Translate spread across the world, these errors became increasingly visible.
Sometimes embarrassingly so.
Users encountered strange mis-translations, awkward phrases, and occasionally complete
nonsensical outputs.
The system was far better than the rule-based translation, but it was still clearly imperfect.
For Google this posed a serious problem, because accuracy was the company's core identity.
The search engine was trusted because it delivered reliable information, but translation
errors risked undermining that trust.
Inside Google, engineers began asking a difficult question.
Why after years of improvements was this system still struggling?
The answer would lead them to one of the most influential discoveries in the history of
artificial intelligence.
Because the problem wasn't just about vocabulary or grammar or even probability.
The problem was context.
Life can be pretty stressful these days.
You want to know what makes me feel better?
Licorice from the Licorice Guy.
Call me crazy, but it's true, because I love licorice.
Long-time listeners of Red Pilled America know that licorice is my absolute favorite candy,
and the very best licorice hands-down comes from the Licorice Guy.
I know licorice, and it doesn't get any better than this.
What truly sets it apart is its flavor and its freshness.
The softness of this licorice will blow your mind.
I've never had anything like it.
The Licorice Guy offers jumbo gourmet licorice in nostalgic seasonal flavors: red, blue
raspberry, cinnamon, watermelon, black, and apple.
Trust me, they're all delicious.
What I also love about the Licorice Guy is that it's an American family-owned business,
and you all know that I'm a big proponent of supporting American companies.
Right now, red pilled American listeners get 15% off when you enter RPA-15 at checkout.
Visit liquorishguy.com and enter RPA-15 at checkout.
That's liquorishguy.com.
They ship daily, treat yourself and those you love, and taste the difference.
Do you want to hear Red Pilled America stories ad-free and become a backstage subscriber?
Just log on to redpilledamerica.com and click join in the top menu.
Join today and help us save America one story at a time.
Welcome back to Red Pilled America.
By the early 2010s, Google Translate had become one of the most widely used tools on the
internet.
Hundreds of millions of people relied on it every day.
It could translate entire web pages, documents, and even conversations in real time.
But inside Google's AI labs, engineers knew something fundamental was still missing.
Because despite years of refinement, their system continued to make mistakes that humans
would never make.
And the reason was deceptively simple.
The system didn't truly understand context.
It processed language sequentially, one word at a time, just like reading aloud.
It would examine the first word in a sentence, then the next, then the next.
At each step, it tried to predict the most probable translation based only on the nearby
words it had already seen.
But language doesn't work that way.
Humans don't interpret sentences word by word in isolation.
We understand meaning by considering the entire context, sometimes far beyond a single
sentence.
An AI researcher at Google named Lukasz Kaiser would later explain the problem this way.
We were thinking for a long time, what's the gist of this problem?
What is not so great about it?
And it turns out these neural networks, you translate sentence by sentence.
So a sentence maybe has 40 words.
But when we speak, we think about context that goes way beyond that, right?
When I go to see my old friend, I will start talking to him about things we talked about
20 years ago, maybe.
And I'll know it.
I'll immediately recall what's needed there.
Human communication constantly draws on distant connections: memories, prior information,
cultural knowledge.
But Google's translation model could only see a narrow window of context, which meant
it often failed when words had multiple possible meanings.
Take a simple English sentence.
I sat on the bank.
To a human, you know immediately that the bank in the sentence is not the same as the bank
that holds your paycheck.
No one sits on Wells Fargo.
You no doubt heard much earlier in the conversation, perhaps even in a phone call days
before, that the person you were talking to was describing a river.
He was sitting on the bank of a river.
You instantly understood the context.
But for a step-by-step language translator, both meanings are possible.
And in Spanish, those meanings require completely different words.
A river bank is orilla.
A financial bank is banco.
Without sufficient context, the translation system had no reliable way to predict which
was correct.
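The limitation described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not Google's actual system: a toy "translator" that picks between the two Spanish words for "bank" (orilla for a river bank, banco for a financial one) by scanning for clue words, and fails when it can only see a narrow window of nearby words:

```python
# Toy illustration of context-window ambiguity (hypothetical, not a real
# translation system). Clue words anywhere in the passage disambiguate "bank".
CONTEXT_CLUES = {"river": "orilla", "money": "banco", "paycheck": "banco"}

def translate_bank(passage_words, window=None):
    # With window=N, the translator sees only the last N words, like the old
    # step-by-step systems; with window=None it sees the whole passage.
    visible = passage_words if window is None else passage_words[-window:]
    for word in visible:
        if word in CONTEXT_CLUES:
            return CONTEXT_CLUES[word]
    return "banco"  # no clue found: fall back to the most common sense

passage = "days ago we talked about the river ... i sat on the bank".split()
print(translate_bank(passage))            # full context finds "river": orilla
print(translate_bank(passage, window=4))  # narrow window misses it: banco
```

The point of the sketch is the last two calls: the correct answer exists in the passage, but a system restricted to nearby words never sees it.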
These types of ambiguities occurred constantly, and they revealed a core limitation in the
step-by-step approach to language processing.
The system needed a way to consider all the words simultaneously, to evaluate how each
word related to every other word in the sentence, not just the ones nearby.
Solving that problem required a fundamentally new architecture.
And inside Google's AI Research Division, engineers began proposing exactly that.
They called their new design a transformer, because the goal was no longer simple substitution.
It was structural transformation.
Instead of reading language sequentially, a transformer would process an entire passage
all at once, evaluating relationships between every word simultaneously.
This allowed the system to capture deeper contextual connections.
They described this mechanism using a simple term: attention.
The transformer model would pay attention to all relevant words in a passage at the same
time, determining which relationships mattered most and which mattered least.
In many ways, it mimicked how humans naturally interpret language.
To understand how revolutionary this was, consider this simple analogy.
Imagine a completed jigsaw puzzle, with each puzzle piece representing a word.
The tab-and-socket connections between the puzzle pieces represent the relationships
between words in a particular language, let's say English.
Older translation models examined one puzzle piece at a time, trying to guess how it fit
without seeing the full picture, but the transformer model looked at the entire puzzle simultaneously.
It could see how every piece connected to every other piece, which meant it could better
understand the overall structure and context.
And once it understood that structure, it could rebuild the puzzle using a different
set of tab-and-socket connections, with those different connections representing a
different language, like Spanish.
In other words, it could transform the sentence into another language with far greater accuracy.
The technical details are complex, engineers speak of tokens, vectors, and backpropagation,
but at its core, the breakthrough was simple.
Instead of processing language step by step, the transformer architecture analyzed relationships
all at once.
This dramatically improved its ability to capture context.
It could track connections across long passages, handle ambiguous meaning, and scale to much
larger datasets, think bigger and bigger puzzles.
Whole documents and entire books could be read into the translator, and on top of that,
the transformer architecture could be trained much faster.
Now to be clear, Google's transformer did not understand language like humans do.
It didn't reason consciously.
It did not know meaning.
It merely simulated context by applying this attention approach.
But Google's transformer architecture was a quantum leap forward.
Google had created a far superior engine for translation.
In June 2017, Google researchers published a paper introducing this new architecture.
The paper's title was, Attention is All You Need.
Lukasz Kaiser, one of the Google architects of the transformer, would later marvel at their
creation.
This is something incredibly amazing, because it's a generic system.
It's four lines of equations, and you give it data, and it learns to translate.
It learns to speak fluent French, and actually, in context, from English.
At the time, this transformer architecture was viewed as an important technological
advancement, designed to solve a specific engineering problem: how to make language
translation faster and more accurate.
But inside Google's AI labs, some researchers began wondering something far bigger.
At its core, the transformer wasn't just another translation tool.
It was a general purpose system, a machine that could learn patterns from enormous amounts
of text, which raised an intriguing question among the Google AI team.
What would happen if they removed the translation goal entirely?
What if instead of asking the system to predict a translation, they simply asked it to generate
text?
Lukasz Kaiser later described this moment.
Imagine if we just trained it to generate text, to write something.
They wanted to see how this transformer system would respond to general inquiries.
Instead of asking it to predict a translation, they thought, let's see if we can just get
it to predict a response to any input.
It was a simple idea.
But little did Google know that this idea would eventually launch the AI Gold Rush, and
threaten Google's very existence.
Coming up on Red Pill America.
OpenAI launched ChatGPT.
How did that reverberate around Google?
How did they react to the debut of that platform?
Well, they reacted with a lot of panic.
They declared a code red, which means all hands on deck.
People are predicting it will wipe out whole industries.
Attorneys, realtors, are we going to be out of a job?
What about ChatGPT's accuracy problems?
Does it know how to say that it doesn't know something?
So here's the thing.
The problem isn't that ChatGPT is imperfect, aren't we all?
The problem is it betrays zero self-doubt.
Red Pilled America is an iHeartRadio original podcast.
It's owned and produced by Patrick Courrielche and me, Adriana Cortez, for Informed Ventures.
Now you can get ad free access to our entire catalog of episodes by becoming a backstage
subscriber.
To subscribe, just visit redpilledamerica.com and click join in the top menu.
Thanks for listening.