
Subscribe at Apple to listen ADS-FREE: https://podcasts.apple.com/us/podcast/ai-daily-news-rundown-openai-kills-sora-for-spud-apples/id1684415169?i=1000757352234
🚀 Welcome to AI Unraveled. Today, the tech giants execute a massive strategic pivot. OpenAI axes its flagship video generator to focus on robotic world simulation, Apple prepares a total overhaul of Siri for iOS 27, and the US government locks down the hardware stack.
This episode is made possible by our sponsor:
🎙️ DjamgaMind: High-Fidelity Intelligence for the C-Suite. If you are a modern decision-maker, DjamgaMind delivers strategic audio forensics in Healthcare, Energy, and Finance. Stop reading headlines and start understanding the systemic impact with our human-verified, technical-grade analysis. 👉 Explore the Forensics: https://DjamgaMind.com/regulations
🎧 Listen Ads-Free: Tired of interruptions? Subscribe to AI Unraveled directly on Apple Podcasts to enjoy all our daily episodes completely ads-FREE!
🛑 AIRIA: With Anthropic’s new "Dispatch" feature taking remote control of your macOS desktop, security is no longer optional. AIRIA provides the enterprise-grade sandboxing required to run these autonomous remote agents safely, ensuring your corporate environment is protected from multi-turn adversarial attacks. 👉 Govern your agents: https://airia.com/request-demo/?utm_source=AI+Unraveled+&utm_medium=Podcast&utm_campaign=Q1+2026
In Today’s Briefing:
Strategic Signal: The Pivot to Utility and Execution. Credits: Created and produced by Etienne Noumen.
Keywords: OpenAI Sora Shutdown, OpenAI Spud Model, Apple iOS 27 Ask Siri, Cloudflare Dynamic Workers, Roche AI Factory Blackwell, Microsoft Superintelligence Ali Farhadi, FCC Router Ban, Agentic Enterprise Security, World Simulation Robotics, DjamgaMind, AI Unraveled.
🔗 RESOURCES & CAREERS
Find AI Jobs (Mercor): Apply Here - https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1
⚗️ PRODUCTION NOTE: We Practice What We Preach.
AI Unraveled is produced using a hybrid "Human-in-the-Loop" workflow.
Welcome to AI Unraveled, your daily strategic briefing.
It is Wednesday, March 25, 2026.
I'm your co-host, Anna.
Today's episode is brought to you by DjamgaMind.
If you need high-fidelity intelligence
built for the C-suite, DjamgaMind delivers
human-verified strategic audio forensics
across healthcare, energy, and finance.
Check out DjamgaMind.com for technical-grade analysis,
and a quick reminder for our listeners.
You can now enjoy all episodes of AI Unraveled completely
ads-free by subscribing directly on Apple Podcasts.
Today is the day the industry stopped
playing with generative toys.
OpenAI has officially killed Sora, sacrificing
its viral video generator to free up
compute for a new robotics world simulator called Spud.
Meanwhile, Apple is totally overhauling
Siri for iOS 27, turning it into a context-aware chatbot
that can execute tasks inside your apps.
We're also covering Cloudflare's new dynamic execution
environments for AI agents, Roche's massive new 3,500-GPU
pharma factory, and the FCC banning all foreign-made
routers.
Let's get into the news.
So think about the fundamental difference
between a Hollywood render farm, churning out
a massive CGI bridge for some summer blockbuster,
and an actual structural engineering firm
calculating the load-bearing tension
of a real-world suspension bridge.
Right, it's a completely different universe of physics.
Exactly.
One is a highly sophisticated parlor trick
of light, shadow, and pixel geometry,
designed specifically to fool your optic nerve.
But the other one has to obey the actual unforgiving
mathematical laws of physics and gravity
and material science.
Because if the CGI bridge fails,
you just get a weird rendering glitch on the screen.
Yeah, somebody spots it on Twitter and laughs.
But if the real bridge fails, the structural integrity
collapses and people die.
The entire physical system just falls apart.
And that distinction, that exact gap between illusion
and reality, is basically the massive fault line
running right through the entire artificial intelligence
industry this morning.
Which is exactly why we are here today.
Welcome to today's deep dive.
We are incredibly glad to have you here with us.
You are the third mind in the room today.
And man, we have a monumental stack of intelligence
to unpack for you.
It is a heavy stack today.
Pure signal, no noise.
Right.
We're looking at a huge wave of news
that just dropped on March 25th, 2026.
And our mission today is to perform a high stakes,
purely forensic and highly technical autopsy
of what we're officially calling the pivot to utility.
I like that framing, the pivot to utility,
because it really captures the violence of what's happening.
It really does, because the era of generative toys,
the AI that hallucinates these photorealistic videos
or writes cute little rhyming poems
or paints cinematic pictures of cowboys on the moon,
that era is officially dead.
Dead, indeed.
Yeah, it is being violently sacrificed.
And in its place, we are seeing this ruthless industry-wide
pivot toward raw execution, impenetrable enterprise security,
and a complete mad dash for hardware sovereignty.
It is a very sobering, frankly,
multi-billion dollar realization for the entire tech sector.
I mean, the core thesis that unites
every single piece of source material we are examining today
is this, the industry has finally hit the wall.
The wall of generation, right?
Exactly.
They've realized that generating digital content
is fundamentally cheap.
It's just a commodity now.
It's a complete race to the bottom on pricing.
Yeah, everybody has a text generator.
Right, but executing actual tasks,
whether that means a physical bipedal robot,
navigating a chaotic warehouse floor
without accidentally crushing a human worker,
or a digital agent safely writing custom code
to wire millions of dollars between international bank accounts,
that requires an entirely different,
astronomically more complex infrastructure.
The illusionists are basically being shown the door.
Yeah, the show is over.
The infrastructure architects and the forensic engineers
are taking over the building.
So here is our roadmap for you today.
We're going to deconstruct the stunning news
that OpenAI is completely killing off
its biggest consumer hit, which is still hard to process.
It is.
From there, we will analyze Apple's terrifyingly capable,
new execution layer that fundamentally
breaks how we actually use our mobile devices.
And ruins the business model for basically every app developer.
Completely ruins it.
Then we will uncover the exact technical reasons
why C-suite executives are currently
in a state of absolute sheer panic
regarding agentic AI security.
And finally, we'll look at the massive, high stakes,
geopolitical scramble to physically lock down AI hardware.
So let's jump right into the deep end of this pivot.
Let's talk about the death of the toy era
and the rise of physical intelligence.
OpenAI is officially winding down Sora.
I have to admit, when I first read the memos on this,
my initial reaction was just complete confusion.
You weren't the only one.
Purely from a consumer product standpoint,
it's staggering.
I mean, Sora was the number one app on the app store.
Number one, it completely dominated
the cultural zeitgeist.
Everyone was generating these crazy videos.
Right.
And they are completely winding it down across all mobile apps
and permanently shutting down its API access.
It's just, it's over.
This was the shiny object the entire world was staring at.
But according to the internal reports, OpenAI staff
were explicitly calling Sora a, quote, drag on resources.
A compute drag.
Yeah.
The CEO of OpenAI's applications, Fiji Simo,
literally told staff weeks ago to stop chasing
what she called side quests.
Side quests, that is such a brutal way
to describe your flagship consumer product.
And because of this total shutdown,
there was this massive $1 billion partnership with Disney,
where Disney was going to let its highly protected
intellectual property be used inside Sora for media generation.
And that deal is now completely dead on the table.
Which tells you absolutely everything
you need to know about where the actual enterprise
value lies in this decade.
A billion dollars in guaranteed revenue
from the biggest entertainment conglomerate on Earth
is now officially considered a trivial distraction.
Literally just a side quest.
But I need to really pressure test this logic with you
because I'm struggling to process the engineering economics here.
Sure.
Lay it out.
I look at Sora.
And to me, it's this incredibly sophisticated,
mathematically beautiful system.
The lighting is perfect.
The reflections in the puddles are totally accurate.
Visually, it's a masterpiece.
Right.
But I guess you can't actually do anything with it.
Yeah.
Are we basically saying that burning hundreds of millions of dollars
in GPU compute to create a stunning photo-realistic video
of a cat walking on the surface of Mars
is fundamentally a waste of silicon,
if the underlying model can't figure out how to fold my laundry?
That is exactly what we are saying.
So video generation is just what they are calling a compute drag.
It is the ultimate compute drag.
And let's really deconstruct that specific term
because compute drag sounds like a generic corporate buzzword, right?
Yeah, it sounds like middle management speak.
But it is actually a highly specific technical diagnosis
of a failure state in machine learning architecture.
We have to look at the mathematical difference
between what video generation is doing
and what real world execution requires.
OK, break that down for me.
So when an autoregressive transformer model like Sora
generates that video of a cat on Mars,
it is not actually simulating physics at all.
It's just drawing pictures.
It is playing a massive statistical guessing
game based on latent space representations.
It's looking at millions of past frames of training data
and predicting pixel by pixel what the next frame should
look like to successfully fool the human visual cortex.
Right.
So it knows what a cat looks like in a 2D frame.
But it doesn't actually know what a cat is.
Exactly.
It doesn't understand mass.
It doesn't understand gravity or torque or friction
or object permanence.
The physical properties are just, they're missing.
Entirely missing.
If the generated cat walks behind a generated rock,
the AI doesn't mathematically keep track
of the cat's physical volume behind the rock.
It just deletes the cat pixels and hallucinates new ones
when it statistically assumes the cat should pop out
on the other side.
Oh, wow.
So it really is just a rendering engine
masquerading as intelligence.
That's all it is.
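To make that distinction concrete, here is a toy sketch, purely illustrative and not any real model's architecture, contrasting a stateless frame predictor with a stateful world simulator when a cat walks behind a rock:

```python
# Toy contrast (illustrative only): a stateless frame predictor loses an
# occluded object, while a stateful world simulator keeps tracking it.

ROCK = range(3, 5)  # x-positions hidden behind the rock

def render(cat_x):
    """What the camera sees: the cat vanishes while behind the rock."""
    return {"cat": cat_x} if cat_x not in ROCK else {}

def frame_predictor(visible):
    """Stateless: extrapolates only from visible pixels. Once the cat
    is occluded, there is nothing left to extrapolate from."""
    return {name: x + 1 for name, x in visible.items()}

def world_simulator(state):
    """Stateful: advances the cat's true position whether or not it is
    visible, because the object exists independently of the render."""
    state["cat"] += 1
    return state

state = {"cat": 2}            # t=0: cat visible at x=2
seen = render(state["cat"])   # {"cat": 2}

# t=1: cat steps to x=3, behind the rock
pred = frame_predictor(seen)                   # {"cat": 3}
seen = render(world_simulator(state)["cat"])   # {} -- camera sees nothing

# t=2: the predictor has deleted the cat; the simulator has not
print(frame_predictor(seen))   # {}
print(world_simulator(state))  # {'cat': 4}
```

The predictor isn't wrong about pixels; it simply has no variable anywhere that represents the cat's continued existence.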
But wait, let me push back on that a little bit.
Isn't visual processing the first mandatory step
to spatial awareness?
How do you mean?
Well, I mean, human beings learn about physics
and object permanence primarily by watching things
interact in our visual field, right?
A baby learns gravity by dropping a wooden block
and watching it fall to the floor.
Right.
Observation.
So how can a robot understand the physical world
if you kill the vision generation model?
Aren't perception and generation intrinsically linked?
That is the exact architectural fallacy
that led to the dead end of generative toys
in the first place.
Really?
A fallacy?
Yes, because perception and generation
are inverse mathematical functions.
OK, explain that.
Recognizing a coffee cup on a table
so you can pick it up requires visual processing, yes.
But generating a photo realistic video of a coffee cup
from scratch requires orders of magnitude more compute.
Because you're having to invent the lighting,
the background, the reflections.
Exactly.
You're rendering unnecessary pixels.
And none of that extra compute helps you understand
how hard to actually squeeze the ceramic cup when you grab it.
Ah, I see.
Let's look at what Bill Peebles, the head of the Sora team,
stated today.
He said the team is pivoting entirely away
from video generation and moving exclusively
toward world simulation for robotics.
World simulation.
Yes.
The prize in his exact words is automating the physical economy.
World simulation is the exact opposite of pixel prediction.
So to simulate the world, the AI has
to actually mathematically map the physics engine of reality.
Yes.
If you put a robotic arm in a kitchen,
it cannot just guess the next pixel.
If a world simulation model guesses wrong
about the friction coefficient of a ceramic mug,
the robotic arm crushes the coffee cup.
It shatters the plate.
Or it breaks a human user's finger.
The compute required to just make a pretty video
is a massive drag because it's burning highly
constrained, incredibly expensive GPU processing power
to produce something that has absolutely zero utility
in the physical world.
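A back-of-envelope sketch of why a wrong friction coefficient crushes the mug; every number here is assumed for illustration, not taken from any real controller:

```python
# Minimum grip force for a two-finger gripper: friction from both
# fingertips must support the mug's weight, so N >= m*g / (2*mu).
# Underestimate mu and the "safe" grip exceeds the crush limit.

G = 9.81  # m/s^2

def min_grip_force(mass_kg, mu):
    """Normal force per finger needed so friction supports the weight."""
    return mass_kg * G / (2 * mu)

mug_mass = 0.35        # kg, full coffee mug (assumed)
crush_limit = 25.0     # N, force that cracks the ceramic (assumed)

true_mu = 0.5          # rubber fingertip on dry ceramic (assumed)
guessed_mu = 0.05      # model hallucinates a near-frictionless surface

needed = min_grip_force(mug_mass, true_mu)       # ~3.4 N: gentle hold
panicked = min_grip_force(mug_mass, guessed_mu)  # ~34 N: ten times more

print(f"correct grip: {needed:.1f} N, hallucinated grip: {panicked:.1f} N")
print("mug survives" if panicked < crush_limit else "mug crushed")
```

A pixel model never has to represent mu at all; a world simulator that gets it wrong by one order of magnitude breaks the object it is holding.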
That makes total sense.
So OpenAI is freeing up all that compute, literally pulling
all those chips off of Sora for their new foundational model,
which they are calling Spud.
And Sam Altman explicitly says Spud is
meant to, quote, really accelerate the economy.
And let's be intellectually honest here with ourselves.
You don't accelerate the macroeconomic landscape
with Disney cutscenes.
No, you definitely do not.
You accelerate it by replacing human physical labor.
Precisely.
And that is exactly why we have to synthesize
this OpenAI pivot with the other massive piece
of intelligence in this sector today.
The Amazon acquisition.
Yes.
At the exact same time, OpenAI is killing Sora
to focus on physical robotics.
Amazon has officially acquired Fauna Robotics.
Well, let's look at fauna because they were founded just
back in 2024 by a team of elite former meta and Google
engineers.
Amazon just bought them outright swept them right up.
Yeah.
And they're roughly 50 employees are literally relocating
to New York City right now to integrate directly into Amazon's
logistics infrastructure.
And the absolute crown jewel of fauna robotics
is this piece of hardware called Sprout.
Sprout is a three and a half foot tall 50 pound bipedal
humanoid robot that retails for about $50,000.
And the marketing materials heavily, heavily
lean on the fact that it is approachable, which
is such a loaded word.
We need to strip away the consumer friendly marketing fluff
of the word approachable.
Oh, absolutely.
What we are looking at with Sprout
is the physical manifestation of the exact pivot we just
described with open AI.
Software models like OpenAI's Spud cannot operate in a vacuum.
They need a body.
A digital brain sitting in a server farm
cannot fold your laundry.
And it certainly cannot pack a cardboard box
in an Amazon fulfillment center in Des Moines.
Software models now require physical vessels.
So Sprout isn't just a fun approachable robot buddy.
No, it is a highly localized edge device
for these massive new world simulation models.
So if I'm understanding the architecture here,
Sprout is to the spud model, what the original iPhone
was to the early internet.
That is a brilliant way to put it.
It's the physical hardware conduit
that allows the digital intelligence
to actually reach out and manipulate the physical environment.
That is the exact forensic reality of what we're looking at.
Amazon is not buying a robotics company
because they want to sell $50,000 toys to consumers.
They are buying the physical infrastructure of autonomy.
The hands and feet.
Exactly.
When OpenAI talks about automating the physical economy,
they are talking about a permanent shift away
from selling $20 a month software subscriptions
to digital copywriters.
Right, the chat GPT plus model is peanuts compared to this.
They are targeting the disruption
of multi trillion dollar physical labor
pools, logistics, warehousing,
elder care, construction, sanitation,
the entire physical supply chain.
If you can build a software model
that actually understands physics, true world simulation,
and you embed that model inside a $50,000
bipedal edge device like Sprout,
you fundamentally alter the unit economics
of global human labor.
When you lay it out like that, the math is just undeniable.
A $1 billion media deal with Disney to make movies
is a drop in the ocean compared to automating
the entire global logistics supply chain.
It's loose change.
The toy era is dead because the real money,
the existential money, is in physical labor.
But, and this is a big,
but while open AI and Amazon are racing
to automate the physical world,
another Titan is quietly attempting
to automate our digital chores.
Right, because the execution layer isn't just physical.
Exactly, because if AI is going to execute real world tasks,
if we are truly moving from generation to utility,
it needs a completely frictionless digital environment
to execute those actions, which brings us to Apple.
And what we are calling the Apple execution layer.
Yes, this is where we see the end of the user interface
as we have known it for 20 years.
This is where the concept of execution
becomes highly, highly intimate.
We are moving from the warehouse floor directly
into the secure enclave of your pocket.
Right, so according to Bloomberg's Mark Gurman,
Apple is debuting a standalone Siri app
and a completely overhauled Ask Siri Chatbot experience
in iOS 27 this coming June at WWDC.
Finally, Siri has been a punchline for a decade.
It really has.
Historically, Siri has been an OS level voice trigger,
you know, set a timer for 10 minutes.
But for the first time, Siri gets its own dedicated application,
a redesigned canvas where you can type,
speak, or upload context.
But here is where the architecture gets genuinely
paradigm shifting.
This is the scary part.
Siri will now natively read across your iMessages,
your emails, and your Apple notes
to build real-time context.
And crucially, it will be capable of executing actions
inside third-party apps directly.
That last sentence, executing actions inside third-party apps
is the most destructive and reconstructive shift
in mobile computing since Steve Jobs launched the app store itself.
Let me throw an analogy at you
because I want to make sure I'm fully grasping
the system's level magnitude of this shift.
Oh, for it.
We can think of the old Siri, the one we separate with for a decade,
as a very basic, somewhat incompetent librarian.
Highly incompetent.
You walk up, you ask a question, and the librarian
just points to a massive shelf of encyclopedias,
hands you a piece of paper with some Google search results,
and basically says, here, you figure it out.
Right, it just routes you to information,
doesn't do the work.
Exactly.
But this new iOS 27 Siri,
reading across iMessages and going inside apps,
this sounds less like a librarian,
and more like someone to whom you have granted
full, legally binding power of attorney.
Wow.
Yes.
They have the master keys to your filing cabinet,
they have your passwords, and they have the authority
to sign contracts and move capital on your behalf.
The power of attorney analogy is highly accurate,
and it highlights exactly why this architectural shift
is so incredibly difficult to engineer securely.
Right, because if it messes up, it's catastrophic.
To understand the why, we have to analyze
the fundamental limitations of conversational AI,
which is what we've been stuck with,
versus what we call dynamic execution environments.
Dynamic execution environments.
OK.
In the legacy paradigm, an AI was structurally blind.
It lived inside a single sandboxed application.
If you opened your Chase banking app,
the local AI inside that app had absolutely no idea
that you just texted your spouse about paying a plumbing
invoice.
Right, the apps can't talk to each other.
Because of sandboxing.
But context is everything.
An AI cannot act intelligently.
It cannot execute a multi-step, complex task
if it only sees one siloed encrypted piece of data
at a time.
Reading across the secure databases of iMessages, emails,
and notes provides the contextual connective tissue
required for autonomous execution.
OK, let me walk through a real-world edge case
to see how this actually functions.
Let's say I get an email from my boss
with a flight itinerary.
OK.
Then I get a text message from my wife
asking what time I land so she can pick me up.
And I have a locked Apple Note with my frequent flyer
login.
Are you saying Siri aggregates all three completely
different data formats, parsing the natural language,
extracting the payload to understand
the total state of my life in that exact millisecond?
Yes.
It reads the email, checks the text, unlocks the note,
and synthesizes the reality of that moment.
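As a hypothetical sketch of that aggregation step, and nothing here reflects Apple's actual APIs, you could imagine pulling one fact from each silo and merging them into a single context object:

```python
# Invented example data standing in for three sandboxed silos:
# an email, an iMessage, and a locked note (unlocked via user auth).

import re

email = "Itinerary: UA 482 departs SFO 14:05, lands ORD 20:31."
text_msg = "What time do you land? I'll pick you up."
locked_note = "United frequent flyer: jdoe / ****1234"

def build_context(email, text_msg, locked_note):
    """Merge one extracted fact per silo into a single actionable state."""
    landing = re.search(r"lands (\w+) (\d\d:\d\d)", email)
    return {
        "flight_lands": {"airport": landing.group(1),
                         "time": landing.group(2)},
        "pending_request": ("reply with landing time"
                            if "land" in text_msg else None),
        "loyalty_account": locked_note.split(": ")[1],
    }

ctx = build_context(email, text_msg, locked_note)
print(ctx["flight_lands"])  # {'airport': 'ORD', 'time': '20:31'}
```

The point is the merge, not the parsing: no single silo contains enough to act, but the combined object does.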
But the aggregation of context is only step one.
What's step two?
Step two is the actual execution of the task.
And this is where we have to ask the hard engineering
questions.
How does Apple actually do that securely?
Right, because are they forcing every single developer
on the App Store to build custom deep-link APIs,
specifically for Siri to plug into?
Or are they literally using visual UI scraping?
Like where the AI just looks at the screen
and clicks invisible graphical buttons?
That's the million dollar question.
Because visual scraping is incredibly brittle.
Right, if it's a power of attorney reading a text
and wiring money in Chase, how does Chase know it's me
authorizing the wire and not some random hallucination
from the model clicking buttons?
Apple is utilizing a deeply enhanced evolution
of the App Intents framework, paired with semantic OS-level hooks.
They aren't just scraping pixels.
That's too fragile.
Because if the bank updates their app
and moves the send button a quarter inch to the left,
a visual scraper breaks.
Exactly, it would click on empty space.
Instead, developers are forced to expose standardized semantic
actions directly to the OS kernel.
Apple securely passes a cryptographically signed token
authorizing the action on behalf of the user.
So it bypasses the visual interface entirely?
Completely.
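A minimal sketch of the signed-token idea, with invented names and a shared-secret HMAC standing in for whatever key infrastructure Apple actually uses:

```python
# Hedged illustration (not Apple's wire format): the OS signs a
# structured action, and the app verifies the signature before
# executing -- no pixels, no buttons, no UI involved.

import hashlib
import hmac
import json

OS_KEY = b"shared-secret-provisioned-at-install"  # stand-in for real keys

def os_sign_action(action: dict) -> dict:
    """OS side: serialize the action deterministically and sign it."""
    payload = json.dumps(action, sort_keys=True).encode()
    sig = hmac.new(OS_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def app_execute(token: dict) -> str:
    """App side: refuse any action whose signature does not verify."""
    payload = token["payload"].encode()
    expected = hmac.new(OS_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return "rejected: unauthenticated request"
    action = json.loads(payload)
    return f"executed {action['intent']} for {action['amount']}"

token = os_sign_action({"intent": "send_payment", "amount": "$200",
                        "payee": "plumber LLC"})
print(app_execute(token))  # executed send_payment for $200

# Any tampering with the payload breaks the signature
token["payload"] = token["payload"].replace("$200", "$2000")
print(app_execute(token))  # rejected: unauthenticated request
```

A production scheme would use asymmetric keys and per-action entitlements, but the contract is the same: the bank trusts the signature, not the click.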
But let's look at the other major piece of news
in this ecosystem today, because it
serves as a perfect parallel technical example
of exactly this kind of execution.
You're talking about Figma.
Yes.
Figma, the massive collaborative design platform,
has just completely opened its design canvas
to coding agents.
This was huge news in the design world.
They are specifically letting AI tools,
like Anthropic's Claude Code, create and edit vector designs
directly on the canvas, using a specific team's existing UI
components and brand standards.
So we are talking about a fundamental shift
in how work gets done.
Meaning, Claude isn't just sitting in a secondary chat window
saying, hey, based on your prompts,
you should probably make that check out
button a darker shade of blue.
Right.
It's not giving you advice.
It is actually querying the company's official design
system, finding the exact authorized hex code,
writing the necessary React or CSS wrapper,
and physically altering the vector canvas itself.
Precisely.
Claude Code inside Figma is manipulating the canvas directly.
It is no longer an advisory chatbot.
It is executing code to change the environment.
Now, I want you to map that exact same logic back
to Apple and the iOS 27 rollout.
OK, mapping go back.
If Siri executes actions inside third-party apps,
directly via these OS hooks, what
happens to the graphical user interface of the app itself?
Oh, wait, think about it.
If I just tell Siri, pay the plumber $200 for today's visit.
And Siri parses the invoice in my Apple mail,
cross-references the plumber's LLC details in my contacts,
and then reaches deep into my banking apps
back in to process the ACH transfer.
I never actually opened the banking app.
No, you didn't.
I never saw their logo.
I never saw their interface.
You never see the interface.
You never see the bank's marketing copy.
You never see the cross-sell advertisement
for a new credit card they desperately want you to click.
The app basically vanishes.
The third-party application becomes nothing
more than a dumb, invisible data
pipe for Siri to route payloads through.
This fundamentally and permanently
shifts the entire economic power dynamic of the mobile ecosystem.
For 15 years, app developers owned the user experience.
They meticulously designed the buttons, the onboarding flow,
the engagement loops, and the monetization funnels.
And now Apple just cut them out completely.
Apple is reclaiming absolute control of the top layer.
The OS layer Siri becomes the only interface
the user ever interacts with.
The death of the traditional UI is the birth
of the OS level execution monopoly.
That is staggering.
I mean, it makes the actual hardware device incredibly
powerful and frictionless for you and me as end users, right?
But it completely commoditizes every single software
developer on the planet.
They are reduced to invisible backend databases.
But this brings up a massive glaring technical vulnerability.
If agents like Siri or Claude inside Figma
are dynamically executing multi-step tasks across apps
and manipulating vectors, they aren't just
clicking invisible buttons.
Under the hood, they need to generate and run custom code
to bridge these massive software gaps.
And that is a terrifying part.
Because historically, letting an autonomous, unpredictable AI agent
write and run code directly on your internal network
is a security nightmare of epic proportions.
It is the architectural equivalent of inviting a stranger
into your secure server room, handing them root access
in a blowtorch, and hoping they only
use it to solder a broken wire.
Exactly.
And this is where the sheer ambition of autonomous infrastructure
violently collides with enterprise reality.
Let's look at the critical intelligence
out of Cloudflare today.
This is a huge piece of the puzzle.
They have just launched a brand new compute architecture
called Dynamic Workers.
This tooling allows AI agents to rapidly and safely run
custom code that they write entirely on the fly.
And it does so at roughly 100 times the speed of legacy options.
100 times faster.
The vital technical detail here is that this system spins up
an isolated code environment, executes the AI's custom payload,
and then utterly cryptographically destroys that environment
in a matter of milliseconds, all without ever exposing
the underlying host network to the AI.
We need to perform a deep forensic breakdown
of why this specific piece of infrastructure
is the absolute linchpin for the entire utility pivot.
Breakdown.
As we established, we are moving from AI
that generates text to AI that executes tasks.
But tasks in the digital world require bridging legacy systems.
If an AI agent encounters a data format,
it doesn't recognize, say, it needs
to convert a weird proprietary legacy database file
from a 1990s mainframe into a modern standardized JSON format
to complete a supply chain task.
Which happens all the time in enterprise IT.
Constantly, it can't just throw its hands up and fail.
It has to dynamically write a custom Python or JavaScript
script to parse that specific file, run the script,
and extract the payload.
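As an illustrative stand-in for the kind of glue script an agent might generate on the fly, here is a parser for an invented fixed-width mainframe-style record layout; the field positions are made up for the example:

```python
# Fixed-width exports were a common 1990s mainframe interchange style.
# Assumed layout: SKU (8 chars) | QTY (5 chars) | PORT (12 chars).

import json

LAYOUT = [("sku", 0, 8), ("qty", 8, 13), ("port", 13, 25)]

def parse_record(line: str) -> dict:
    """Slice a fixed-width record into named fields and type the qty."""
    rec = {name: line[start:end].strip() for name, start, end in LAYOUT}
    rec["qty"] = int(rec["qty"])
    return rec

legacy = "PAL-0042  120ROTTERDAM   "
print(json.dumps(parse_record(legacy)))
# {"sku": "PAL-0042", "qty": 120, "port": "ROTTERDAM"}
```

Trivial once written, but the agent has to write it unprompted, and then something has to run that freshly generated code somewhere safe.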
But if you let an AI run a dynamically generated,
completely un-vetted Python script on your main corporate
server, you've opened a fatal vulnerability.
Fatal.
The AI might hallucinate and accidentally
write a script that wipes the database.
Or worse, a malicious actor might prompt and check the AI,
tricking it into writing a script that
silently exfiltrates your customer data,
or establishes a reverse shell.
Right.
So the old analogy for this was the incredibly fast chef
in the exploding kitchen.
I love this analogy.
To keep the restaurant safe, you
build them a hermetically sealed kitchen,
let them cook one egg, and then instantly detonate the room
so they can't burn the building down.
But let's be technically accurate here,
because I really want to understand the mechanics
of how cloud flares doing this.
It's not just a new kitchen.
If an AI agent is writing a Python script to parse a database,
and we're talking about running that in under five milliseconds,
we certainly aren't spinning up a full virtual machine.
No, absolutely not.
Or even a standard Docker container,
because the cold start overhead alone for a container
would take seconds.
Exactly right.
You cannot use traditional hypervisors
or heavy containerization for this.
It's too slow.
Cloudflare's Dynamic Workers are utilizing V8 isolates.
V8 isolates?
What exactly is an isolate?
An isolate is a wildly lightweight construct.
It allows you to run thousands of completely separate sandboxed
execution environments inside a single operating system
process.
We inside a single OS process.
Yes.
Each isolate has its own isolated memory heap
and its own garbage collector.
The environment is born.
It executes the payload within an incredibly strict memory
and CPU time limit.
And then the context is cryptographically wiped.
So it's not even a separate machine.
No, it's not just an exploding kitchen.
It is a kitchen built in a temporary, separate,
dimensional space, where the physics of the environment
literally do not allow a fire to spread back
to the main host operating system.
That is wild.
This is ephemeral compute in its most extreme form.
The nanosecond, the task is complete.
The attack surface ceases to exist.
If you don't have this ephemeral compute infrastructure,
if you can't sandbox the execution down to the millisecond,
you simply cannot deploy agentic AI safely.
You just can't do it.
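V8 isolates are a C++-level construct inside the V8 engine, but the contract can be mimicked in a few lines of Python: run untrusted code in a throwaway interpreter with a hard time budget, capture only its stdout, and let the whole environment vanish when the call returns. A rough sketch, not Cloudflare's implementation:

```python
# Ephemeral execution sketch: each call gets a fresh isolated
# interpreter (-I disables user site-packages and env hooks), a strict
# timeout kills runaways, and nothing survives except captured stdout.

import subprocess
import sys

def run_ephemeral(untrusted_code: str, timeout_s: float = 1.0) -> str:
    """Execute code in a disposable subprocess; kill it past the budget."""
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", untrusted_code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "killed: exceeded time budget"

print(run_ephemeral("print(sum(range(100)))", timeout_s=5.0))  # 4950
print(run_ephemeral("while True: pass", timeout_s=0.1))
# killed: exceeded time budget
```

A real isolate is far lighter, with millisecond-scale startup and a shared process, but the security property being imitated is the same: the execution context is born, constrained, and destroyed per task, so the attack surface stops existing the moment the task ends.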
And this reality seamlessly connects
to the next piece of intelligence, which
is frankly alarming for anyone working in corporate IT
right now.
The DeepView report.
Yes.
In a new episode of the DeepView podcast,
Salesforce's SVP of Enterprise IT strategy,
Shibani Yahuja dropped a massive diagnostic bombshell.
A total bombshell.
Over the past year, she conducted deep strategy sessions
with 587 C-suite leaders, the overwhelming consensus.
The enterprise sector is utterly terrified of agentic AI.
Terrified is the word.
She noted that while organizations relying
on simple chat bots are falling dangerously behind the curve,
the organizations that are actually
trying to deploy these autonomous agents
are hitting massive, intractable security blockers.
And let's be clear, those 587 C-suite leaders
are entirely justified in their terror.
Oh, 100%.
A C-suite leader, particularly a chief information security
officer, has one primary job: risk mitigation.
A simple conversational chat bot is safe.
It's essentially an interactive, highly-articulate FAQ
document.
Right.
It reads a prompt and it outputs a text string.
Exactly.
It has no digital hands.
It cannot query the production database.
It cannot issue an API call to wire money.
It cannot alter the code base.
It is fundamentally an observer.
But an autonomous agent is an active participant.
It is a black box of dynamic execution.
You give it a high-level goal, like
optimize our shipping logistics for Q3 across all European
ports.
And the agent autonomously decides how to achieve it.
It figures out the steps on its own.
It writes the code.
It contacts the vendor APIs.
It alters the shipping routes.
For a CISO, deploying an agent
without an ephemeral execution environment like Cloudflare's
V8 isolates is tantamount to professional suicide.
The black box of execution.
You don't know exactly what steps it took until it's already
done.
And by then, the damage could be catastrophic.
Catastrophic and irreversible.
And we aren't just talking about corporate networks
accidentally deleting themselves because of a bad script.
We are talking about the integrity
of the entire global information ecosystem.
This scales up to the geopolitical level very quickly.
Because the final piece of intelligence
in this section comes from Gartner,
who just named a company called Blackbird,
as the top vendor globally in what they call disinformation
narrative intelligence.
Disinformation narrative intelligence.
And the reason this sector is exploding
is because agentic AI is supercharging
the speed and scale of malicious content.
This is the shadow side of dynamic execution.
We talked about how Cloudflare's workers allow an AI
to safely write code to solve a database problem.
Well, if an AI agent can spin up code in milliseconds
to solve a supply chain issue, a malicious unaligned AI agent
can spin up code in milliseconds
to architect a massive cyber attack.
It's the exact same technology.
Exactly.
An agent can autonomously register 10,000 fake news domains,
write custom HTML and CSS for each site
to make them look legitimate, populate them
with contextually accurate, deep fake videos,
generate millions of synthetic social media accounts
with unique backstories, and coordinate
a synchronized, highly targeted narrative attack
on a competitor's stock price or a sovereign national election.
All without a single human lifting a finger.
Security isn't just about protecting
the perimeter of your servers anymore.
It's about protecting the narrative reality
that society and financial markets operate within.
That is terrifying.
The scale of agentic disinformation
is completely beyond the capacity of human moderation.
You must deploy autonomous defensive AI
simply to survive the autonomous offensive AI,
which makes the stakes for enterprise security
higher than they have ever been in the history
of digital infrastructure.
For decision-makers who need to understand
the second order effects of these shifts,
DjamgaMind provides human-verified, technical-grade audio
forensics in healthcare, energy, and finance.
Visit DjamgaMind.com.
That transition regarding enterprise risk is vital,
because the sheer terror of software vulnerabilities,
the exact existential dread
those 587 C-suite leaders are feeling about their networks,
is driving a massive physical reaction across the globe.
A physical reaction?
Yes.
We've talked extensively about software
sandboxing and ephemeral environments.
But to truly secure the enterprise,
to guarantee beyond a shadow of a doubt
that your proprietary data and your execution layer
cannot be compromised,
corporations and sovereign nations
are doing something drastic.
What are they doing?
They are violently ripping their AI infrastructure
out of the public cloud, and they
are hoarding the physical hardware,
which brings us to the critical concept
of hardware sovereignty and the great pullback.
Let's examine the pharmaceutical giant, Roche.
This is a perfect example.
They have just announced the launch
of a massive new AI factory, specifically designed
to accelerate drug discovery and complex biologics
manufacturing.
But here is the massive red flag.
They aren't renting time on Amazon Web
Services or Microsoft Azure.
No, they are not.
They aren't spinning up cloud instances.
Roche is physically amassing over 3,500 highly coveted
Nvidia Blackwell GPUs, completely on-premise.
They are building a vertically integrated,
heavily guarded physical AI factory.
We have to pause and deeply appreciate
the magnitude of this capital expenditure,
because it represents a complete reversal
of a decade of tech orthodoxy.
A total 180.
For the last 10 years, the absolute,
unquestioned gospel of the technology industry
has been move everything to the cloud.
Right, don't own the metal.
You don't buy physical servers.
You rent them.
You scale compute up and down elastically on demand.
It was considered gross corporate malpractice
to tie up capital building your own private data center.
It was seen as archaic.
And now, one of the most sophisticated pharmaceutical companies
on earth is spending hundreds of millions of dollars
to physically own the metal.
It is deeply ironic.
We are essentially witnessing the death of the public cloud
for frontier intellectual property.
Roche is basically building a 1990s
on premise server room, just on unimaginable AI-driven steroids.
That's exactly what it is.
But why?
Explain the technical necessity to me.
Why is the enterprise suddenly pulling their compute back
in-house?
What is driving this great pullback?
It is the absolute necessity of hardware sovereignty.
When Roche uses an advanced model, think
AlphaFold on steroids, to discover a new proprietary drug,
they are dealing with a molecule that could literally
be worth tens of billions of dollars
in exclusive patent revenue.
The stakes are astronomical.
The computational process involves feeding
their most deeply guarded, highly classified biological data
into the model.
If they execute that inference on a public cloud,
they face two unacceptable technical risks.
OK, what's the first one?
First, latency.
If the AI is analyzing molecular folding in real time
or controlling physical robotics in a biologics manufacturing
plant, a 50 millisecond network delay routing
data to an AWS server halfway across the country
could ruin an entire batch of medicine.
Because the physics won't wait for the network to catch up.
Exactly.
But the second risk is far more critical,
and that is data leakage and side-channel attacks.
Right, because when you are on the cloud,
you are on a multi-tenant architecture.
You are sharing the physical silicon with other companies.
Exactly.
Even with the best software encryption,
public clouds rely on hypervisors to separate tenants.
But as we've seen with vulnerabilities
like Spectre and Meltdown.
Come on, Spectre and Meltdown.
Side-channel attacks can actually
observe the electrical fluctuations or cache
timing of the CPU to extract data
from a neighboring virtual machine.
Wait, so someone on the same physical server
can basically read your data just by measuring the heat
or the timing of the chip?
Yes.
You cannot risk your trillion-dollar intellectual property
traversing public fiber-optic internet pipes
or sitting on a multi-tenant cloud server
where a zero-day hypervisor vulnerability
could expose it to a state-sponsored hacker.
It's just too vulnerable.
You need absolute sovereign control.
You need a physical fortress where you own every single inch
of the wire from the GPU to the cooling system.
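The timing channels just described have a well-known software-level cousin, and the standard defense is instructive: make execution time independent of the secret data. A minimal Python illustration of the leak and the fix (the function names here are ours, not from any particular library beyond the stdlib):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaky: returns at the FIRST mismatching byte, so the running time
    # tells an attacker how many leading bytes of a secret they guessed.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where a mismatch occurs,
    # removing the timing signal; wraps the stdlib's hardened primitive.
    return hmac.compare_digest(a, b)
```

Cache-timing attacks like Spectre exploit the hardware analogue of this same leak, which is why multi-tenant silicon is so hard to fully harden in software alone.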
Hardware sovereignty, the physical fortress,
and that paranoid fortress mentality
extends all the way down to the literal pipes
and switches that connect us, which
explains the most aggressive geopolitical maneuver
in our stack of intelligence today.
The router ban?
Yes.
TechPresse reports that the FCC has officially
banned the import and sale of all new foreign-made network
routers in the United States.
And the mandate explicitly states
that it is designed to, quote, secure the lowest level
of the hardware stack against foreign espionage
as AI inference moves to the edge.
This is a profound strategic recognition
by the federal government of exactly how
AI architecture is fundamentally changing.
Because it's not just in the data centers anymore.
Historically, AI lived in those massive centralized data
centers.
But as we just discussed with the Amazon Sprout robot
and Apple's new Siri execution layer, AI inference,
the actual thinking and execution of the trained model
is rapidly moving to the edge.
It's in the room with us.
It is happening locally inside the robot, inside the mobile
phone, inside the local manufacturing facility.
When inference moves to the edge,
the mundane network router sitting in your office closet
or on your warehouse floor suddenly
becomes the absolute critical front door
to your autonomous AI.
Let me make sure I'm mapping this threat correctly.
If you're a listener sitting on your home or office
Wi-Fi right now, consider what happens when your local router
is the one routing the intelligence, not a distant AWS server.
It changes the attack vector completely.
If Siri or the Sprout robot needs
to quickly pull context from a local secure database
to execute a physical or digital task,
that secure data flows straight through that local router.
Exactly.
If a foreign intelligence service compromises
the firmware of that physical router,
they don't just passively steal your data.
What else can they do?
They can actively inject malicious context
into the data stream.
They can compromise the AI agent's dynamic execution environment.
Oh wow.
If the router lies to the AI about the contents of the database,
the AI will confidently execute a catastrophic flawed task
based on that poisoned context.
Because the AI trusts the network.
The FCC realizes that advanced software encryption
is completely useless.
If the actual physical silicon routing the packets
is compromised by a nation-state adversary,
hardware sovereignty doesn't just mean owning the GPUs,
it means controlling the silicon stack
from the absolute ground up.
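One concrete mitigation for the poisoned-context scenario is end-to-end integrity: the database signs what it sends and the agent verifies before acting, so a compromised router can delay or drop traffic but cannot silently alter it. A minimal sketch using a shared HMAC key (the key value and payload fields below are illustrative placeholders):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"provisioned-out-of-band"  # illustrative placeholder key

def sign_context(payload: dict) -> dict:
    """Attach an integrity tag to a context payload before it crosses the network."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "tag": tag}

def verify_context(msg: dict) -> bool:
    """Reject any payload that was modified in transit, e.g. by a lying router."""
    body = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])
```

Note what this does and does not buy you: tampering becomes detectable, but a hostile router can still deny service, which is part of why the FCC's answer is to control the silicon itself.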
Which ties perfectly into the next major
tectonic hardware shift we are tracking.
ARM Holdings, the foundational architecture company
whose chip designs literally power 99%
of the smartphones on the planet,
has just released its very first in-house
custom manufactured chip.
This is a massive shift for ARM.
They are calling it the ARM AGI CPU.
For nearly 36 years, ARM only licensed its architecture
designs to companies like Apple, Qualcomm, and Nvidia.
They drew the blueprints and they let other people
build the houses.
Right, they were just an IP company.
But now they are physically manufacturing production
ready processors built specifically
from the ground up for running heavy AI inference
in data centers.
And Meta has signed on as their massive debut customer.
It is a historic violent pivot that
underscores the sheer desperation across the industry
for viable silicon alternatives.
For 36 years, ARM's entire business model,
their core identity, was strict neutrality.
They were the Switzerland of semiconductors.
Now they are dropping the licensing model
and entering the physical battlefield directly.
Why?
Because the bottleneck for the pivot to utility
isn't software anymore. It's chips.
Everyone needs chips.
Companies like Meta are burning tens of billions of dollars
annually, buying Nvidia's H100 and Blackwell GPUs.
By partnering with ARM to build the AGI CPU,
which leverages their incredibly efficient
Neoverse architecture, Meta is trying
to engineer a massive bypass around the Nvidia monopoly.
They want out.
Neoverse is specifically designed
to handle the massive memory bandwidth
required for AI inference much more efficiently
than the legacy x86 architectures.
Meta is attempting to achieve total sovereignty
over their own compute costs.
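The bandwidth point is easy to make concrete with back-of-envelope math: in single-stream decoding, every generated token must stream the full weight set through the memory bus once, so bandwidth, not raw FLOPs, caps throughput. A rough bound (the example numbers are illustrative, not ARM's or Meta's actual specs):

```python
def decode_tokens_per_sec(params_billions: float,
                          bytes_per_param: float,
                          mem_bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode throughput when inference is
    memory-bandwidth bound: tokens/sec <= bandwidth / model size in bytes."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return mem_bandwidth_gb_s * 1e9 / model_bytes

# e.g. a 70B-parameter model at 1 byte per parameter on ~2 TB/s of
# memory bandwidth tops out around 28 tokens/sec per stream,
# no matter how much compute the chip has.
rate = decode_tokens_per_sec(70, 1.0, 2000)
```

This is why an inference-oriented CPU design competes on memory subsystem efficiency rather than peak arithmetic throughput.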
Everyone is desperately trying to escape dependency.
Roche is spending billions to escape the public cloud.
The United States government is banning hardware
to escape foreign routers.
Meta is partnering with ARM to escape
the crushing margin of the Nvidia monopoly.
And Microsoft.
Microsoft is executing highly aggressive maneuvers
to escape open AI.
The great uncoupling.
That is the final piece of intelligence
in this hardware and infrastructure section.
Microsoft AI just aggressively poached
three elite top tier researchers
from the Allen Institute for AI,
including their former CEO, Ali Farhadi.
Ali Farhadi is a legend in the space.
They're joining Mustafa Suleiman's
newly formed super intelligence team,
directly inside Microsoft,
to build in-house frontier models.
Let's look at the strategic irony here.
Microsoft invested well over $10 billion
into open AI.
They physically tied their entire corporate identity,
their Bing search engine, their co-pilot suite,
to open AI's models.
They basically outsourced their brain to Sam Altman.
But strategically, when you reach the scale
of the utility pivot, you simply cannot outsource
the core cognitive engine of your company's entire future
to a volatile third party that you do not fully control.
It's too risky.
Poaching elite foundational talent like Ali Farhadi
is Microsoft loudly signaling to the market
that they are building their own sovereign intelligence layer.
From the physical silicon to the algorithmic researchers,
the brutal mandate across the entire industry
is crystal clear.
Absolutely no one wants to rely on anyone else anymore.
You either control the entire stack
or you will inevitably be commoditized
by the entity that does.
It is a ruthless zero sum landscape.
But as we transition into the final phase of our analysis,
we have to look at what happens when
these sovereign, hypercapable, physically embodied AI agents
actually collide with the real world.
This is where it gets incredibly messy.
Because as AI moves into physical bipedal robots,
as corporations lock down their own impenetrable hardware
fortresses, and as agents take autonomous actions deep inside
our banking apps, the legal and political systems
of the world are suddenly scrambling
to write a brand new rule book for a high stakes game
that has already begun.
And the rule book they are currently drafting
is fundamentally and irrevocably altering
the legal liability of software code.
Let's start with a massive, paradigm-shifting
legal precedent that just dropped out of New Mexico.
A jury there just found Meta liable on every single count
in a massive child safety and algorithmic harm case,
ordering the company to pay $375 million
for endangering children and actively
concealing sexual exploitation networks
on Instagram and Facebook.
A historic verdict.
Now obviously tech companies get sued all the time.
But the specific tactical way they got sued here
is what matters to our infrastructure analysis.
Attorney General Raúl Torrez framed this case,
not around speech and not around content moderation,
but strictly as a product's liability claim.
Products liability, that is the magic word.
He successfully argued to the jury
that the algorithms themselves, the engagement loops,
the platforms were physically and inherently defective.
This brilliant legal strategy
completely sidesteps Section 230 protections,
which normally provide absolute shielding for tech companies
regarding user generated content.
And now 40-plus state attorneys general
have a proven tested courtroom playbook
to attack the tech giants.
Now, looking at this lawsuit and looking
at the White House tech council we're about to discuss,
I want to pause here for you, the listener.
Because as we analyze these political appointments
and these massive legal battles,
we are keeping our strictly analytical hats firmly on.
We absolutely do not endorse political figures.
We aren't taking sides in these highly charged legal battles,
and we certainly aren't endorsing any
administration's platform.
Our mandate today is solely to look at the board forensically
and analyze the technical and infrastructure implications
of these moves based purely on the original source material.
Precisely.
And purely from an infrastructure and legal architecture
standpoint, let us analyze the sheer brilliance
and the absolute terror of Attorney General
Torrez's legal strategy.
Why is it so brilliant?
Well, Section 230 of the Communications Decency Act
was written in the mid 1990s.
It essentially codified that a website
is merely a distributor, not the publisher,
of the information provided by its users.
It treats a social media platform exactly
like a digital bulletin board.
Yes.
If a malicious actor pins a defamatory or illegal flyer
on a physical cork board, you don't
sue the manufacturer of the cork board.
You sue the person who pinned the flyer.
That single law fundamentally built the modern internet.
It protected tech companies from endless paralyzing
litigation over user speech.
Right.
Torrez didn't attack the flyers.
He attacked the structural integrity of the cork board
itself.
I look at this New Mexico ruling,
and it raises a massive forward-looking question
for everything we've talked about today.
If a social media platform can now be legally classified
by a jury as a defective product, exactly like a lawnmower
with a faulty blade that flies off and injures a consumer,
or a car with an exploding airbag, what
happens when we introduce highly autonomous AI agents
into this exact legal framework?
You're asking the multi-trillion dollar question.
Are we shifting from the regulation of speech
to the strict regulation of digital products?
We absolutely are.
And that is precisely why this ruling is sending
shockwaves through the deployment schedules of any company
building agentic AI.
Because it changes the entire risk calculus.
Let's tightly connect this legal precedent back
to everything we've broken down today.
We analyzed Apple's Siri executing
financial transactions across apps.
We broke down Sprout, the physical bipedal robot
operating in warehouses.
We analyzed Cloudflare agents dynamically writing code.
Right.
All autonomous actions.
If an AI agent is no longer just generating a text response,
which is classified as speech, but is actively
executing physical or digital actions,
it constitutes a product feature.
It completely and immediately
leaves the protective umbrella of Section 230.
The umbrella is gone.
If Siri mistakenly wires your life savings
to an offshore scammer because it hallucinated
the context of an iMessage, or if the Sprout robot drops
a 50 pound box on a toddler because its world simulation
model suffered a fractional drop in frame rate,
that is no longer a speech issue.
No, that is a defective product causing
catastrophic financial or physical harm.
Under this exact new New Mexico playbook,
the liability for the tech company deploying that agent
is absolute, and it is entirely indefensible
under legacy internet laws.
The liability is absolute.
If you build an autonomous agent,
you are legally and financially responsible
for every single micro action it takes,
which explains exactly why the leaders
of these massive infrastructure companies are desperately
maneuvering to get inside the actual room
where the new federal laws are being written.
They have to be in the room.
Let's look at the next major development.
President Trump has officially appointed
a massive slew of tech CEOs, including leadership from meta,
Nvidia, Dell, Oracle, and AMD,
along with Google co-founder Sergey Brin
and prominent venture capitalist Mark Andreessen,
to a newly formed White House Science
and Technology Advisory Council.
That is a heavy roster.
Very heavy.
The council is co-chaired by White House AI
and crypto czar David Sacks and Michael Kratsios.
And the reporting explicitly notes
that several of these leaders have direct,
massive financial ties to the administration.
Again, maintaining our strictly analytical forensic lens,
I want you to look closely at the specific composition
of that council roster.
Look at the names.
Nvidia, Dell, Oracle, AMD.
All hardware giants.
What do these specific companies have in common?
They are the absolute titans of the exact hardware
sovereignty movement we just discussed.
They manufacture the chips.
They build the server racks.
They own the data centers.
They provide the physical infrastructure
of the AI revolution.
They aren't the software guys writing chatbots.
They are the heavy metal guys building the fortresses.
Exactly.
The architects of hardware sovereignty
are now physically sitting inside the White House,
holding the pen, drafting the national regulatory strategy
for technology.
The entanglement between sovereign state power
and frontier AI infrastructure development
is now absolute and unbreakable.
That's a complete merger.
They recognize that AI is no longer a consumer software
product to be managed by the App Store.
It is critical national infrastructure.
It is the new enriched uranium.
And when you are dealing with the technology
that requires massive allocations
of the national energy grid, absolute control
over global semiconductor supply chains,
and fundamentally threatens the stability
of global labor markets, the private sector
simply cannot operate in isolation from the sovereign state.
They must permanently merge their strategic interests
to survive.
Which brings us to the final fascinating piece
of the puzzle today.
The tech giants know exactly what is coming.
They see the writing on the wall.
They know that the societal disruption
we are talking about, massive job displacement
from bipedal robots like Sprout, the catastrophic legal
liabilities from defective agents
destroying enterprise networks, the absolute need
for national security integration.
They know this disruption is imminent.
And they are trying to financially
self-insure against the inevitable public backlash.
How so?
Well, the OpenAI Foundation has just committed
a staggering $1 billion this year
across disease research, job displacement initiatives,
and advanced AI safety protocols.
And they have officially brought on OpenAI co-founder Wojciech Zaremba
as the head of AI resilience.
AI resilience.
I want to highlight that as a very carefully
deliberately chosen phrase.
Why resilience?
When a technology is engineered to create
trillions of dollars in market value
by explicitly automating the physical and digital economy,
it inherently and inevitably displaces the human labor
that previously powered that exact economy.
A $1 billion foundation fund dedicated to job displacement
and algorithmic safety is not just corporate philanthropy.
It's not a charity drive.
It is a highly strategic geopolitical buffer.
It is a stark acknowledgement from the architects
of the technology that the brutal transition
from generative toys to high stakes utility execution
is going to be incredibly turbulent for society at large.
Yeah, you can't just replace millions of jobs
without some turbulence.
They are actively trying to fund societal resilience
so that the inevitable massive regulatory
and public backlash doesn't legally suffocate
the development of the technology itself.
So let's pull all of this together
and synthesize this incredible high stakes journey
we've been on today.
It's been a lot to cover.
We started by looking at a billion dollar magic trick
being canceled.
In just one single day of intelligence gathering,
we watched the official death of generative toys,
the execution of Sora,
because the industry realized that auto regressive pixel
guessing is a massive compute drag
compared to the real multi trillion dollar prize
of true world simulation and physical robotics.
The pivot to utility.
We witnessed the birth of autonomous execution inside
our mobile devices, where Apple's iOS 27
is turning third party apps into invisible commoditized pipes,
effectively ending the era of the graphical user interface.
We saw 587 C-suite executives in a state of sheer panic
over the security black boxes of agentic AI,
forcing the creation of incredibly complex,
ephemeral code environments that self-destruct in milliseconds
just to keep corporate networks from being compromised.
All to protect the execution layer.
We watched the global enterprise sector
and geopolitical superpowers aggressively rip
their intelligence out of the public cloud
to build vertically integrated, heavily guarded,
sovereign hardware fortresses.
And finally, we saw the legal system successfully hack
legacy 1990s internet laws to hold these autonomous products
strictly liable, while the Titans of Silicon moved
directly into the White House to hold the pen
on the future of national infrastructure.
It is a profound, entirely irreversible
architectural shift in how our world operates.
And I want to leave you our listener
with a final, highly provocative paradox
to mull over as you watch this play out.
Give it to us.
We talked extensively today about hardware sovereignty.
We talked about Roche spending billions
to build an isolated AI factory with 3,500 GPUs,
entirely disconnected from the multi-tenant public cloud
to fiercely protect its proprietary data.
And we talked about Amazon building its own physical models
to power robotic logistics.
Right.
If every major enterprise and every sovereign nation in the world
successfully builds its own isolated, paranoid,
completely sovereign AI fortress
to protect against the dynamic execution
threats we discussed,
how will these digital fortresses ever
interact with one another?
Right.
Because ultimately, the global economy requires interaction.
Roche still has to legally and financially do business
with Amazon's supply chain.
Exactly.
What happens in exactly three years
when Roche's highly sovereign, heavily fortified AI agent
needs to rapidly negotiate a complex multi-billion dollar
pharmaceutical supply chain contract
with Amazon's highly sovereign, heavily fortified AI agent,
when neither AI is legally or architecturally
allowed to expose its underlying host network,
its context, or its code to the other?
A total stalemate.
We are currently spending hundreds of billions of dollars
across the globe building brilliant, incredibly powerful,
heavily armed, and totally isolated digital minds.
But we haven't even begun to figure out
the basic cryptographic protocols
for how these paranoid fortresses will safely
talk to each other in the dark.
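The oldest building block for that kind of mutually distrustful protocol is a cryptographic commitment: each side can lock in an offer without revealing it, then prove later that it never changed. A toy hash-commitment sketch (the offer strings are made up for illustration):

```python
import hashlib
import secrets

def commit(offer: str) -> tuple[str, bytes]:
    """Lock in an offer without revealing it.

    Publish the returned digest; keep the nonce private until reveal
    time. The random nonce stops the other side from brute-forcing
    the offer from its hash.
    """
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + offer.encode()).hexdigest()
    return digest, nonce

def reveal_ok(commitment: str, nonce: bytes, offer: str) -> bool:
    """Verify that a revealed offer matches the earlier commitment."""
    return hashlib.sha256(nonce + offer.encode()).hexdigest() == commitment
```

Real agent-to-agent negotiation would need far more than this (authenticated channels, hardware attestation, perhaps zero-knowledge proofs), but commit-and-reveal is the kernel of negotiating without exposing internals.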
We took the illusionist off the stage
to build the suspension bridge.
But we forced him to build it inside a locked subterranean vault.
It is a fascinating and frankly terrifying paradox
to leave you with.
Subscribe to AI Unraveled on Apple Podcasts
to get this daily intelligence completely ads free.
That concludes our rundown for March 25.
The signal for today is the pivot to utility.
The market is no longer paying for AI
that makes pretty pictures.
It is paying for AI that executes code, discovers drugs,
and controls the desktop.
This episode was made possible by DjamgaMind.
For human-verified, technical-grade forensics,
visit DjamgaMind.com.
And don't forget to hit subscribe on Apple Podcasts
to get your daily news completely ad free.
Until tomorrow, keep unraveling the future.
And before you go, if your company is building the tools
that power the workflows we talked about today,
I'd love to showcase them to this audience.
We don't just run ads.
We build technical simulations that prove your value.
Let's build something together.
Visit DjamgaMind.com slash partners to get started.
Until next time, keep building.
Hey, it's Bubba Wallace from 23XI Racing.
You know what feels like forever?
Sitting on a plane waiting for takeoff.
Good thing I've got Chumba Casino.
With daily bonuses and social casino games on tap,
this is the kind of fun that makes time fly.
Why not turbocharge your downtime?
Play now at ChumbaCasino.com.
Let's Chumba.
Sponsored by Chumba Casino.
No purchase necessary. VGW Group. Void
where prohibited by law. 21-plus. Terms and conditions apply.
Hey, it's Cole Swindell.
After I give everything I've got to land a perfect vocal,
I usually take five before jumping into the next track.
And I've learned exactly how to recharge in that time.
Some folks grab coffee.
I hit a quick good-luck spin.
Next thing you know, the break is just as fun as laying down the track.
A better break makes for a better take.
Need a break?
Let's Chumba.
No purchase necessary. VGW Group. Void
where prohibited by law. 21-plus. Terms and conditions apply.
Sponsored by Chumba Casino.
Hello, it is Ryan, and I was on a flight the other day
playing one of my favorite social spin slot games
on ChumbaCasino.com.
I looked over at the person sitting next to me.
You know what they were doing?
They were also playing Chumba Casino.
Everybody's loving having fun with it.
Chumba Casino's home to hundreds of casino-style games
that you can play for free anytime, anywhere.
So sign up now at ChumbaCasino.com
to claim your free welcome bonus.
That's ChumbaCasino.com, and live the Chumba life.
Sponsored by Chumba Casino.
No purchase necessary. VGW Group. Void
where prohibited by law. 21-plus. Terms and conditions apply.
Tyler Reddick here from 23XI Racing.
Another checkered flag for the books.
Time to celebrate with Chumba.
Jump in at ChumbaCasino.com.
Let's Chumba.
No purchase necessary. VGW Group. Void
where prohibited by law. T&Cs apply. 21-plus.
Sponsored by Chumba Casino.
Capital One's tech team isn't just talking about
multi-agentic AI.
They already deployed one.
It's called Chat Concierge, and it's simplifying car shopping
using self-reflection and layered reasoning with live API checks.
It doesn't just help buyers find a car they love.
It helps schedule a test drive,
get pre-approved for financing, and estimate trade-in value.
Advanced, intuitive and deployed.
That's how they stack.
That's technology at Capital One.

AI Unraveled: Latest AI News & Trends, ChatGPT, Gemini, DeepSeek, Gen AI, LLMs, Agents, Ethics, Bias
