
At NVIDIA's GTC conference, CEO Jensen Huang announced a bold target of one trillion dollars in orders by 2027.
The AI graphics breakthrough DLSS 5 has been received with memes and controversy.
Other announcements include the groundbreaking Vera Rubin platform, which promises 35 times the performance; the strategic acquisition of Groq; and advancements in self-driving technology.
------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:00 NVIDIA's Trillion-Dollar Vision
1:23 DLSS 5
3:49 Vera Rubin
5:25 Breakthroughs with AI Chips
9:48 The Next Generation: Feynman
11:33 Full Self-Driving Revolution
14:01 Robotics on Stage
16:45 OpenClaw and Enterprise Solutions
17:22 AI in Space
19:12 The DGX Spark Announcement
21:10 Closing Thoughts on NVIDIA's GTC
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
NVIDIA just held its GTC conference in San Jose, where Jensen Huang walked on stage
in front of 30,000 people and opened with a number that's probably going to echo across
Wall Street for weeks: a trillion dollars in expected orders through 2027.
That's double what he predicted just six months ago from that very same stage.
And then he spent the next two hours, in a very long presentation, unveiling why even a trillion dollars is conservative.
But a lot of people throughout this presentation seem to have missed the actual reveal.
I think they're focused on a few specific highlights.
When the reality is the things that he presented that are going to yield the trillion dollars
are probably much different than I think the average person expects.
As you know, we were chatting as we were watching this two-hour movie marathon.
What were your thoughts?
Did you make it through?
Was it too boring?
Was it exciting?
What were the first impressions of this presentation?
The thing that excited me the most was the announcement of DLSS 5,
which seems to be the most controversial announcement.
It's this new 3D rendering AI model that basically refactors old games or gaming graphics
into newer, higher performance graphics.
So if you're looking on the screen right now, you're seeing a version of a video game
and then suddenly it's enhanced.
It's kind of like a Snapchat filter, which I think a lot of the gaming community had
backlash about.
They thought it was just AI slop and didn't really vibe with it.
But in my opinion, it's actually quite a good product and would make me more engaged
to play the actual game.
So it was surprising to see DLSS 5 get the attention that it did just because Nvidia
announced some unbelievable stuff and seemingly this was the headline at all the news outlets
and it's basically an AI upscaler for video games.
It takes existing graphics that are pretty decent and upscales them.
It makes the facial features better.
It increases the dynamic range, the highlights, the shadows.
And when I saw it, I loved it.
I was like, oh, this is pretty cool.
But the internet reaction to this was far from what mine was.
I mean, if you're looking at the video on screen, it enhances the facial features.
If you're now looking at the meme on screen, that was the public perception.
It was all so negative for such a small feature that Jensen dropped in a two hour presentation.
So do you have any idea what's going on with this backlash here, particularly
around DLSS 5?
I think the point around the gaming community is they're just very sensitive around AI being
involved in art.
And I get it, right?
Like some things can be kind of cringe and don't seem very human.
But the point is like, this just makes graphics of games way, way better.
I mean, the meme you're showing on the screen isn't the accurate representation of what
this thing is going to do.
We had the head of Bethesda Games make a partnership with Nvidia just for this
tool.
It's going to save him and his team hours and hours of work.
And I saw someone make a really good point online yesterday, which is: if you're a game
developer that's spending years designing AAA games,
This not only saves you a bunch of time, but it also helps you realize your artistic vision.
Usually when you're a game developer, you make sacrifices when you are designing a particular
character or an asset, because you don't have enough money, compute, or the tools to be
able to do this.
This should just be seen as another tool to get to your actual vision.
So I think it's a good reason, but there's another reason why this is super cool, and
everyone missed it, in my opinion.
This is the exact same technology that you can use to create visual learning for robotics
and for automotive, sorry, autonomous driving cars.
So this is the same technology that Nvidia is using to build out their partner program
with, I think it was, BYD and a bunch of other car companies, which we'll talk about in
a second, as well as being used in their robotics division with their GR00T robotics models.
This is the same tech.
So I actually think it's cool that it's so pervasive and it's entering gaming, but that's not really
the big story for me here, whether you hate it or you like it, you're not going to be able
to use this thing for the mass audience until probably next year.
This thing runs on like two RTX 5090s, which are very expensive.
It's not very accessible to the average day to day person.
So by the time it gets released to the mass audience, I think it's going to be a lot
better than what we see today.
So that's the headliner.
If you are not paying attention to the headliner, there is a lot of other stuff that was announced
that is far more interesting than this, and we have a lot to unpack, so buckle up,
starting with the Vera Rubin platform, which is the big headliner.
I mean, this, this is the big boy.
This is what was teased previously six months ago.
I think when Jensen was announcing his, like, $500 billion in revenue; now he's up to
a trillion.
He was unveiling a little bit more information about the chip.
Ejaaz, what's new with Vera Rubin?
Yeah.
So the headline metric is it's 35 times more performance than the previous generation.
Anyone who's been tracking knows a new Nvidia GPU typically gets you about a two to five
X performance upgrade on a good day.
This is the largest jump overall, and the secret is there are about five to seven major
components of a GPU; typically, when you improve for the next generation of GPUs,
you just refactor one of those things.
Why?
Because if you did all of them at once, that's really high risk; anything could go wrong
and result in delays to improving your GPUs.
Jensen said, forget it, I'm just going to do it anyway, and he pulled it off.
Seven new chips make up this entire new thing, and it gets implemented into five new
racks, creating what he calls on stage an AI supercomputer, and that's why you get this
massive performance increase.
It's just insane.
These chips are what's running all the AI that we use every single day.
Previously, everyone was training on Hopper.
Hopper is the chip that's running a lot of the AI models that you're actually using
today.
The frontier labs have just started to spin up the Blackwell chips.
That's what we've seen with Opus 4.6, what we've seen with GPT 5.4; that's the Blackwell
chip.
It takes a long time for these chips to go from being invented to actually rolling down
to data centers and then training the models. What we're seeing next, and we're not going
to actually feel the effects of this until probably early next year, is Vera Rubin.
Vera Rubin, I mean, it's a 10 times performance improvement versus Blackwell, just in terms
of performance per watt.
For every gigawatt of energy that these data centers have, this new chip is equivalent
to 10 gigawatts worth of compute today.
For every gigawatt, you get a 10x improvement on intelligence, and that is huge.
It is absolutely massive growth, because we're planning to scale the gigawatts of these
data factories pretty significantly: by the end of the year, six to seven gigawatts
for some of these.
That's going to be equivalent to 60 to 70 gigawatts of intelligence as of today.
I think that's pretty important to note is that there is a strong delay when it comes
to these chips actually being released, actually being implemented on the racks, trained
and deployed.
It's hard to imagine we don't get AGI from this.
The other major improvement that they made: about a month and a half ago,
Nvidia "acquired" (and I put that in quotes because apparently it wasn't a
formal acquisition) a company called Groq, spelled G-R-O-Q, and the reason why they acquired
them is they get the rights to a very special type of AI chip called an LPU, which uses
something called SRAM, static random access memory.
Now, if you've been keeping tabs on what's happening with memory right
now: memory prices have skyrocketed.
In fact, it's probably going to affect a bunch of major companies releasing their own
technology devices, because the cost of memory is so high that they can't even get it to
customers without charging extortionate prices.
Jensen made a really smart move by acquiring this company and integrating that technology
into Vera Rubin.
What you're seeing on the screen now is basically the same architecture of Vera Rubin but integrated
with this SRAM technology.
The resulting effect is you can inference AI models at a much larger scale.
That 10X that you just mentioned, Josh? A bunch of that is unlocked by these new LPUs.
So we're now starting to see Jensen take two things more seriously.
One is a different type of chip architecture. Usually Nvidia is known for generalized GPUs,
and that's where the bread and butter is.
Now we see him branching off into these hyper-specific inference chips, because he looks
over his shoulder and he sees, not close but kind of far back, Google's TPUs looming, and
AMD's and Intel's chips coming up behind him as well.
And they're all specializing in inference-specific chips and the argument or the reason behind
that is a lot of the world isn't going to be focused on training AI models.
It's going to be prompting and querying AI models and that's going to grow exponentially
more.
So this is Nvidia and Jensen basically saying we're going to make a mark here.
This is our stand.
This is why we acquired Groq, and here's the chip that we're making, and Vera Rubin is going
to be that chip for anything and everything, general purpose and inference.
When I think about these chips and just project out to the future, it's so exciting
because there's such a clear path to where I think everyone wants AI to go.
Yeah.
So getting to that AGI level and beyond and this chart that we're showing on screen here
is a beautiful example of this, because in addition to Blackwell and in addition to Rubin,
they also teased Feynman already even though Rubin is months to years away from actually
being deployed at scale.
So Nvidia is essentially 18 months, give or take a few, ahead of what the current reality
looks like.
And I think this is really important to know: currently, at the bleeding edge of AI,
we're running Blackwell right now, and we just started running Blackwell, and Blackwell
has about 12 months of improvements to be made before we start to feel the effects of
Rubin.
By the time we feel the effects of Rubin, which is that 10x performance per watt improvement,
they already have Feynman ready to go, to be deployed into these data centers, so
we're already two exponential steps ahead of where we currently sit, and it's
hard to imagine, with the build-out that's happening and the performance-per-watt increase
that we're seeing from all these chips, that we're not just going to have this completely
vertical and exponential growth of AI across the board.
And I think that's probably at the core of Jensen's thesis of a trillion dollars is like
the spending isn't going to stop because he's already created the future.
It's just a matter of actually deploying it and plugging it into the grid so you could
power these chips and get the intelligence that everyone wants and it's unbelievable.
So Feynman is coming.
They didn't announce a bunch of things about Feynman, but that's the name of the next chip
architecture, named after your favorite physicist's favorite physicist, Richard Feynman.
Everyone's a big fan of him.
Very cool.
Very excited.
His book was something.
I did.
Yep.
Surely You're Joking, Mr. Feynman!
He has a few books that are all awesome.
So if you're into physics or math or just really admire great teachers, Richard Feynman
is amazing and is now the naming architecture for the future of Nvidia's AI chips.
So cool stuff.
Bold name.
Big ambitions.
Nvidia currently sits at, what, $4.5 trillion, the biggest, most valuable company in the world.
Will all of that be the same by the end of the year?
Is this going to be prolonged?
Well, we can ask our friends at Polymarket to answer this for us.
And it looks like there has been a strong trend signaling yes, and this was not always
the case.
I mean, it looks like Alphabet (Google) at one point during the year, in February, just
a month ago, was projected to flip them.
People thought Google was going to be the world leader.
It is clear now that is absolutely not the case.
In fact, Apple, who we frequently talk about, looks like they have a better chance of doing
it than Google now.
And now Nvidia is up to 70%.
So it seems highly probable that people saw this presentation.
People have been seeing progress.
And they are very much bullish on Nvidia.
So the market is pricing in a pretty steep increase to the stock price before the end
of the month.
I mean, it's currently trading at $182.
And it looks like there's, what, a 25 to 30% chance that it trades over $200
this month. So things are looking good for Nvidia: remaining the most valuable
company in the world and also continuing to trade up on this news.
It was an incredible presentation.
Thank you to Polymarket for sponsoring this segment of the episode.
And now we could probably get into the next most interesting thing for me at least, which
was the full self-driving moment.
In fact, Jensen Huang said, this is the chat GPT moment for self-driving cars.
It has arrived.
This is a bold take because the full self-driving industry is pretty, pretty vicious.
Hasn't it been solved by Tesla already at this point?
Well, it depends who you ask.
It sounds like internally they feel confident that they've solved it, but they're
currently on this march of nines,
where they have efficacy up to 99.x% and they need to get it to 99.999%.
Now, Waymo clearly has the most deployed version of this.
You could actually go and get into a Waymo.
You can get into a Cybercab in some places in Austin, but they still have the kind of guiding
drivers; they haven't figured out the legislation to let them be fully autonomous.
But Jensen is saying, hey, if you're not Waymo, if you're not Tesla, we have a solution
for you.
We are actually going to build the full self-driving stack and integrate it directly into your
cars for you from the sensors all the way to the software stack.
And they just recently partnered with BYD, Nissan, Hyundai, and Geely.
And for those who aren't aware, BYD is actually the largest electric car manufacturer in the
world, more so than Tesla.
They're based in China.
And it's showing that Nvidia is kind of country-agnostic,
right?
If you want a self-driving car, come to us.
We got you.
Nvidia, or Jensen, just doesn't care if he aligns with China or not.
He's just out there to expand Nvidia into anyone and everyone's hands.
As you said, BYD is the biggest EV maker.
They sell more cars than Tesla every single year.
And so that distribution, like think about that, like imagine you put your self-driving
model into as many cars as possible.
It's probably going to get smarter way, way quicker because it's just inside more cars.
So that's real competition against Tesla from a competitive-moat standpoint.
The other thing is he's also integrating into Uber as well, right?
So it's going to be launching in 28 cities by 2028.
So through the end of next year, which seems like a long time.
But that's a lot of cities.
And Uber has a lot of reach when it comes to just a driving network in general.
So this is a really cool announcement.
I don't quite know if it's apples to apples with Tesla full self-driving.
Tesla owns the end-to-end stack; Nvidia doesn't really have that.
This is more of a thing that you can kind of attach onto cars.
And if I had to guess, this is not just me being an Elon fanboy, there's a lot more
friction that Nvidia will run into.
So I don't think this is a direct one-to-one competitor.
This is a key difference.
If I'm a Tesla shareholder, I'm not really nervous about this because like you said,
Tesla owns the full manufacturing stack and they have millions of cars on the road that
are full self-driving capable today.
They're just one software update away from cracking that.
And that final software update comes when the legislation passes.
That is to be determined.
But they're there.
They're ready.
Waymo, and I guess Uber now, are kind of on the other side of this, where they've
perhaps figured out the software stack.
They're close at least.
But they have nowhere near figured out the manufacturing stack for this at scale.
And manufacturing, as we know, designing hard things in the physical world is hard.
And that's going to slow these companies down a lot.
So I think for Uber, this is probably the best-case scenario.
They finally have a saving grace.
Someone who wants to actually work with them to help deploy the full self-driving vision.
But they got a long way to go.
So it's nice that they're trying.
This is kind of like Apple CarPlay, but for full self-driving where they're not going
to make the cars.
They're going to sell you the software to put in the cars and hopefully one day make
them full self-driving.
So we'll see how that goes.
That was the first of the robotics section of this episode.
Let me introduce you to the unhinged version of this Josh.
So Olaf, made popular by the movie Frozen, came to life on stage.
What you're looking at is an autonomous, self-directed robot that runs on Nvidia's, I'm
not making this up, Newton robotics engine.
It also runs on their Jetson chip as well.
So what you're looking at is a homegrown NVIDIA robot and product that is autonomously
interacting with Jensen.
I can't help but think that some of this must be scripted.
There's no way that the robot is this interactive and obviously it's been outfitted with the
look of this Frozen character, but pretty cool all around.
I don't know if this is going to be in everyone's home, I don't know what the point of this
was.
Maybe they're going to sell rights to Disney or something, but yeah, I don't really have
a strong take on this.
Yeah, well, it's just, I mean, it's more of the direction that they're heading towards,
which is real world physical AI, right?
It's like we're in self-driving cars, now we're going to get robots.
They're creating these small packaged computers to put into these things that are creating
the entire stack.
NVIDIA is becoming the Tesla for the general-purpose company: if you can't build it all
yourself, NVIDIA has done it, and they will sell you all of their hardware and all of
their software, and they're moving a lot into open source.
And I guess that's probably the transition to the next announcement, which is their
NemoClaw announcement, the supposed OpenClaw competitor, which isn't an OpenClaw competitor
at all, actually.
It's basically just an enterprise solution for companies that want to use OpenClaw.
So the founder of OpenClaw, he was there, Jensen gave him a shout-out.
And basically, NemoClaw is a way for companies to deploy OpenClaw in a more secure way and
to run on any coding agent and deploy from anywhere.
And I think a lot of people, I mean, ourselves included thought this could be competition.
The reality is it's complementary.
NVIDIA wants OpenSource AI because they want to build the hardware that you use to run
the OpenSource AI.
And it seems like this was kind of like a win for everyone, including the open source
community.
It was pretty cool.
And Peter Steinberger, as you mentioned, the founder of OpenClaw, actually worked with
Jensen and the NemoClaw team for months to build this out.
Their target market are enterprise customers specifically because when OpenClaw went viral,
it went viral because everyone could spin up their own personal agent.
There was one glaring issue: loads of security holes.
People could lose money, expose their credit card details, lose all their personal
data, or have their computers hacked.
Not good if you are an enterprise company, but companies still want
to get access to this thing.
So Jensen kind of dreamt up this platform that sits on top of OpenClaw.
So it works very harmoniously with it.
And now you can kind of use OpenClaw without any worry.
You can spin up an agent that does a particular enterprise workflow, or you can use it for
accounting, back office stuff, whatever you can dream of.
It's now safe to use.
And it's OpenSource.
Just great.
Yeah.
Okay.
So two more things.
We have two more quick announcements.
One, AI and space.
Space GPUs.
So Jensen got on stage.
He said, we are going to build Vera Rubin for space.
You're going to have Vera Rubin orbiting the Earth.
It's going to be in these data centers.
It's going to be fantastic.
And then he says, well, we're not quite sure how we're going to do it, but we're going
to do it.
They still have a lot of issues that they need to solve.
One of which is the cooling.
One of which is solving the radiation.
There are a series of issues that are going to need to be solved.
But there is the intention to do this.
And he didn't announce it here, but I suspect they're working with SpaceX
to design these chips hand in hand, because that's really the only company that's going
to be getting these things up into space.
And I think it's really exciting.
When we think about AI data centers in space and the quality of the Vera Rubin chip architecture,
bringing those two things together and getting them in orbit by 2027, 2028, maybe the latest,
that's going to be pretty cool.
That's going to change the game.
Elon is incredibly bullish on this.
He thinks that SpaceX is now going to flip every company in the world when it comes to
AI development.
And he might not be wrong, because if he can get these chips at scale from Jensen,
send them up into orbit, and lower the cost per watt to a small fraction of what it is
today, that's a huge upgrade.
I feel like this was just a custom announcement for Elon Musk for one individual.
He's the only guy that's really trying to launch GPUs into space at scale.
In this demo, he's demoing it using one of Nvidia's investment portfolio companies,
Star Cloud, which is the initial startup that made GPUs in space a thing.
But then Elon jumped on the wave and completely took it over.
And he's the guy that's actually going to be economically able to launch these at scale.
So it's a good day to be a Tesla or SpaceX shareholder.
And the final announcement that we're going to talk about is the DGX Spark.
They released the new Spark.
And it's now looking like it's going to be priced around $4,700, which seems high.
But if you are someone who runs local inference at your home and you're considering buying
a Mac studio or something to run these tokens on your own, perhaps you have an open-claw
instance you want to run local AI, this is a pretty compelling option.
They're basically taking a GB300, which is the Grace Blackwell chip.
And they're turning it into a tiny little thing that fits on your desk.
That's 750 gigabytes of coherent memory and 20 petaflops of AI compute, which allows
you to run models up to a trillion parameters right from your desk.
So it's an unbelievably dense machine.
In fact, if this was released probably even five years ago, this probably would have
been the most powerful supercomputer in the world.
And now it's compressed down to something that fits on your desk.
So it's a testament to how much efficiency improvements have been made every single year.
And how powerful the NVIDIA brand is, man, there's no one else building stuff like this.
No one's even close.
Looking at this holistically, this is a home run for NVIDIA, for shareholders, for investors,
for the AI industry.
Everyone wins because NVIDIA is just running full-tilt.
It's funny.
You said that back in the day, this would be so much more expensive.
You would also need like a dedicated, like, server room to fit this entire thing.
And now you can just sit it on your desk next to your laptop and have, in Jensen's words,
an AI supercomputer in your house, super cool.
It comes shipped with NemoClaw as well.
So you get two products, two NVIDIA GTC 2026 announcements for the price of one.
And you said it was $4,700, Josh?
That's super cheap.
That's what it's looking like on their website, right?
I think that's for the Spark.
That is for the Spark, yeah.
And they had a separate announcement, I think, on the DGX station, which is like the
more powerful supercomputer, which also consequently sits on your desk as well.
So just two different price points, but two very powerful things.
Yeah, just a home run for Nvidia.
Yeah.
What a great presentation.
That is everything; those are the highlights.
I would love for you to share which part you are most excited about.
Is it DLSS 5?
How many gamers are here that actually care about this stuff?
Do you hate it?
Tell me.
Because I don't.
I think it's cool.
I don't know why the artists maybe don't like their art being digitally enhanced.
I have good news.
You could just turn it off.
You could just play the vanilla game, too.
That's also cool.
So I'd love to know what people are most excited about here.
I think for me, space data centers, man, that's my favorite thing in the world.
I want to see AI in space.
For me, it's going to be DLSS 5, but used for robotics.
I'm nerding out over robotics right now, because I think they're going to have that
ChatGPT-from-2022 moment.
At any point this year, they're getting good enough to move around, run, lift heavy items.
We just need a good model.
I think having something like DLSS 5 kind of expand robotics
models is really exciting for me.
One thing for sure, the naming of all of these is going to continue to be absolutely
horrific.
Amazing.
Oh my God.
But yeah, that's the wrap.
Thank you so much for watching this recap on NVIDIA's GTC.
I hope you enjoyed it.
We powered through two hours of pretty boring presentation to bring this to you.
So hopefully it was a little more interesting, a little more exciting, very technical;
Jensen's a technical guy.
I hope you enjoyed Ejaaz's leather jacket that he's rocking today, in honor of Jensen
and NVIDIA. Jensen, if you happen to be watching this,
I hope you appreciate it.
Yeah, as always, please don't forget to share with your friends, like the video, subscribe,
leave a comment, rate us five stars, all the great things, thank you so much for watching
and yeah, we'll see you guys in the next episode.
Limitless Podcast



