
This video is sponsored by METER, the company building networks from the ground up.
METER delivers a complete networking stack, wired, wireless, and cellular,
in one solution that's built for performance and scale.
With METER, businesses get fast, secure, and scalable connectivity
without the burden of managing multiple providers or tools.
METER's single networking stack scales from branch offices, warehouses,
and large campuses to data centers.
METER's integrated network stack is designed
to give you deep control and visibility.
METER's full stack solution covers everything from first site survey
to ongoing support, giving you a single partner for all your connectivity needs.
Thanks to METER for sponsoring.
Go to METER.com slash heavy strategy to book a demo now.
That's M-E-T-E-R dot com slash heavy strategy to book a demo.
Hello everybody, I'm Johna Till Johnson, CEO of Nemertes,
here with my co-host, John Burke, CTO of Nemertes.
And we are here with the show that tries to ask the right questions,
not give the right answers. That would be Heavy Strategy.
And on today's show, what we wanted to talk about was what impact, if any,
the semiconductor strategies that were put forth about two months ago at the CES show
should have on IT strategies.
And specifically, we're talking about folks like Nvidia, AMD, Siemens,
where they're headed.
Is this something that IT needs to care about?
What should you be thinking about?
I would say the answer is yes to the first, very simply,
because a lot of the semiconductor folks are starting to talk about
putting together frameworks and making open source code available
and all that fun stuff.
So we're going to be analyzing those.
And we have some thoughts on what it means for you.
And particularly, John has some thoughts.
So, John, what are some of your takeaways?
I have thoughts.
I'll start with Nvidia. That was definitely the most consequential thing
that came out in those keynotes.
In some ways, anyway, it was the most watched,
because Nvidia's role in the AI economy is so central
and because as an economic force, they're so huge.
It seems like the right place to start.
And Jensen, if I can call him Jensen,
spent a fair amount of his time framing AI
as the unifying, empowering layer underneath a new generation
of applications: that applications would become things
that we basically train software to do
rather than things that we write, compile,
and thereafter execute.
And the first thing I thought when I got the gist of what he was saying
was this is such a massive waste of resources.
If you just look at how much energy it takes to run a CRM
written to be a CRM versus an AI masquerading
as a CRM, then if we tried to make that shift
in any short time frame, it would be absolutely unattainable,
unachievable, because we just don't have the compute power
or the electrical power to make it happen fast.
That's an interesting one, John.
And actually, I hadn't put the pieces together
until you said that just now, this idea of AI as the unifying layer.
Certainly, I still have very real questions about the ROI of AI
for most of the things people use it for at this point.
And you saw that Cisco was talking recently
about adding two more layers to the OSI model.
Yes, yes.
Yeah, which is, I think we talked about that a bit on Network Break,
and we kind of concluded, nice try, Cisco, but the time has passed.
You know, the OSI model was never very good at understanding
what happened at application layers.
And quite frankly, I wouldn't expect Cisco to be the go-to source of that anyway.
But yeah, we kind of agreed that no, it's not necessary
and there are other entities that are worried about the protocols
that are needed to intercommunicate.
And I do understand just continuing that side note for a second
that the people that actually understand how communications works
are, in fact, the network vendors.
So Cisco does have a real play here.
It's just I don't think any of the AI people
and any of the applications people are going to allow
that that knowledge exists outside their little golden globe
of whatever it is.
So, but it is interesting to think about the fact
that if AI is the underpinnings of everything,
rather than the way I kind of think of it,
as a bolt-on to most applications today,
that is a fundamental shift.
It is the most fundamental shift ever contemplated
in computing as far as I can tell.
So, you know, let's rush in.
Not so encouragingly, although he talked about this for quite a while,
he barely mentioned security or having any kind of controls
that wrap around this stuff to make sure that it's doing
what you asked it to.
And that leads right into the second biggest takeaway for me,
which is that he considers agentic AI interfaces
to be basically the user interface of the future,
that anything you interact with is going to be agentic AI.
And to me, that, especially coupled with the lack of focus
on security, let's put it that way,
and the lack of assurance that what you ask it to do
is what it actually does, is really frightening.
You know, it basically, it feels like he's saying,
let's all go vibe-ops and just kind of groove our way through the day.
Yeah, yeah.
And I'm still circling back on your point about the cost,
the energy and power cost of doing it.
So, essentially, we've got no security, and no real answer
on where or how we're getting the power
and the energy for this.
So, those are two kind of big-ish holes, I would say.
I have to agree.
Now, a real booster would say, well, yeah,
but, you know, the new generation of processors is,
I don't know, four times as power efficient or something like that,
depending on what you're doing, maybe ten times for some things.
But there's lots of good reasons to push back on that.
The first one being, what about all the existing installed infrastructure?
How fast can that all be replaced with the new generation of stuff?
That's one.
Two, his vision is that the models continue to become more demanding.
So, most of that new capacity from the new generation of stuff
gets eaten up by just increases in the number of parameters, et cetera, et cetera.
And third, and this was another major thing
that he spent a lot of time talking about,
is he wants to add a whole new third tier of activity
to the AI infrastructure.
You know, in the past, we've been accustomed to talking about
the training infrastructure and the inference infrastructure.
And generally, they weren't the same things.
They were being done in different places at different times, right?
So he wants to add a third whole category of hardware
that goes into production use of AIs,
and that is the simulation layer, or simulation pillar,
or pool of simulation resources.
However you want to think about it,
he spent a lot of time developing the idea that there was never going to be
enough training data to train AIs to be what we want them to be.
So we have to use simulation to fill the gap.
Which, yeah, I have a huge problem with that
because I'm going to leap ahead to my main area
where I don't have a problem,
which is there was an awful lot of talk,
particularly, but not exclusively, from Nvidia and Siemens,
about integration with IoT and the physical world.
Here's my problem with the simulations.
Nowhere in there was Jensen talking about integrating
with the physical world to get physical feedback on those models.
So essentially, what you're saying is,
oh, now I'm just going to get my data,
my canonical source of truth from simulations.
Well, that's bonkers. That's insane.
You know, it's like saying,
I'm never going to have my pilots fly in the real world.
I'm just going to train them on simulators, and they'll be just fine.
Simulators really matter.
They're really important.
But by their very nature,
simulations cannot simulate the entire real world,
because there are massive quantum effects, for example,
in the real world, that we don't fully understand
and that actually may play into any given experience
at any given moment in time,
and a simulation won't have those,
because they're not built in, because it's a simulation,
which is why I've always been pounding the table
and saying, look, we need quantum data input
before AI is really going to take off in the simulation area.
Now, by really take off,
I mean in the hardcore stuff, where it's life or death
that you get it right, you know, developing molecules
and things like that.
You know, one of the things they talked about,
and I think this was actually
Siemens rather than Nvidia,
was that they used Pepsi as an example
in their manufacturing facilities.
The world is not going to end if you get a Coke bottle
or a Pepsi bottle that's like somehow malformed, right?
That's not the end of the world.
But when you're talking life or death situations,
yes, that's a huge deal.
So, yeah.
Driving a semi through my downtown,
definitely don't want that to be simulation based
in any major way.
Yeah.
In any major way.
I will say that Jensen did talk a lot
about the need to scale I/O in order to enable scaling,
which I thought was very interesting,
particularly for those of us who come from a networking perspective.
It's like, oh, good, thank you for not forgetting
about what it takes to get the data in and out.
That's always a good thing.
And I want to come back to that one.
Okay.
But following the simulation thread,
it also points back to the nature
of the beast for these LLMs,
which is that, despite the boosterism
of people that we've worked with in the past
about these things actually developing
an actual model of the world internally
that is not just what they were trained on,
we keep coming back to the fact that the models
see every new situation as a new situation,
genuinely new to them.
And even minor variations from situations
that they've been trained on can lead to them
having no idea what to do.
It is a lack of the ability to properly abstract
and generalize and act on those abstractions
in generalized circumstances, an ability that human intelligence,
and I'm going to go so far as to say
most animal intelligences, have at some level.
You don't have to train a cat on every new kind of door
once they get the idea of a door.
They understand that you can walk through the middle part, right?
If you change the shape of the door,
if you change the size of the door,
that doesn't stop them from knowing
they can walk through the middle of it.
And just a side note,
I know I'm interrupting your train of thought.
But do you remember which science fiction it was
that started with the door dilated?
I think that was...
I want to say it was Heinlein, but I...
Oh, it was definitely Heinlein.
I think it was the one called Gulf.
But I'm not positive.
The only reason I'm bringing that up
is when you were describing the cats
not being flustered by doors.
Yeah, but if they...
I think my cats would...
My cats would actually be spooked by a dilating door.
They'd be like, what the hell is that?
And then they would practice jumping
into just the right point to make it open and close
so they could have fun with it.
One of them would...
The other one would try to destroy it.
That's what he does.
Anyhow, so back to
that lack of real intelligence, and simulation
basically happening during inference,
real-time simulation filling in the gaps.
Again, a huge amount of infrastructure
and rapid I/O is required to make that possible,
and it just points up the real limits
of this style of AI.
Well, so let me come back to something you raised, though,
because I like the idea of the three pillars.
It's like training, inference, and simulation,
at least as places that need chips,
which is legitimately Nvidia's concern.
What would that actually mean for, for example,
cloud architectures for technology folks?
Like we all kind of know that the training gets done somewhere,
you know, training gets done in location A,
the inference gets done in location B,
one can be a private cloud, one can be a public cloud,
where would the simulation live?
I have to assume that at a minimum,
it would be living adjacent to the inference cloud
and able to transfer data at high speeds between them,
so got to be really cohabiting with the inference cloud.
And I suspect it's going to have to be there
in the training infrastructure as well.
You're going to need a separate pillar there,
because simulation is going to be part of the training process, too.
So that means if you're, for example,
designing data centers to handle your anticipated
huge volume of AI,
because you're right about to hit the point
where you're going to see a really significant ROI,
you're going to have to basically double
whatever it was you were going to build out,
because you have to support this entirely new pillar
that didn't exist before.
Potentially, yeah.
So that's one of the implications
that may be coming out of this.
Yep.
And, yeah, not just the compute pieces of it,
but also the networking pieces.
Yeah.
Yep.
Yep.
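To make that concrete, here's a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from the keynotes; the point is just the shape of the math when a simulation pillar gets bolted onto an existing build-out plan:

```python
# Rough capacity sketch: what a third "simulation" pillar does to a build-out.
# All numbers are illustrative assumptions, not figures from the keynotes.

PILLARS_TODAY = {"training": 10.0, "inference": 8.0}  # planned MW per pillar (assumed)

def plan_with_simulation(pillars, sim_ratio=0.8):
    """Add a simulation pillar sized as a fraction of each existing pillar.

    sim_ratio is a pure guess: simulation co-located with both training
    and inference, at 80% of each pillar's power budget.
    """
    sim = sum(mw * sim_ratio for mw in pillars.values())
    return {**pillars, "simulation": sim}

before = sum(PILLARS_TODAY.values())
plan = plan_with_simulation(PILLARS_TODAY)
after = sum(plan.values())
print(f"planned: {before:.1f} MW -> with simulation pillar: {after:.1f} MW "
      f"({after / before:.0%} of the original plan)")
```

With those made-up ratios, the plan lands at 180% of the original build-out, which is the "basically double" territory being described.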
And all of that wraps back into those
agentic models that are serving as user interfaces
for everything,
and are, by Jensen's own admission,
still facing the challenge of hallucination,
and learning to do research
to make those instances less frequent.
So understanding that they don't know an answer,
going and doing more research,
and then coming back with the answer they find in their research.
But he sort of danced past,
without ever directly addressing, what it's going to do
when there's no answer to be found with research,
when the answer is still, I don't know.
Well, and it would be weighted heavily towards giving you an answer
rather than saying, I don't know.
Well, and I would also say,
I mean, this is one of the things we know
is inherently built in.
It's a flaw that's built into the model.
All you can do is twiddle with the weightings.
But if you don't know,
if you, the twiddler,
don't know a priori,
which source of truth should be considered
the canonical source of truth
for a particular question,
you cannot possibly train the AI
to do the right thing
when confronted with that question.
So, you know,
and this is my problem
with the entire idea of LLMs
as being in some form actual AI
or more like AGI.
It can tell you what everybody else says.
It can tell you what the most common answer is.
It can't tell you what the right answer is
because there's no conception of right.
There's just that which is weighted higher
and that which is weighted lower,
which is good enough for a bunch of purposes.
You know, if you're trying to figure out
how to draw or paint a picture,
you know,
animate a movie,
write some code,
you know what?
What everybody else has done is just fine.
Probably ideal.
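Before moving on: to picture what that missing "I don't know" path from a minute ago would look like, here's a minimal sketch. Everything in it, the retrieve() function, the Evidence type, the 0.7 threshold, is a hypothetical stand-in rather than any real agent framework; it just shows the abstention branch an agentic interface would need, and that a raw LLM is weighted against taking:

```python
# Minimal sketch of abstention logic for an agentic interface.
# retrieve() and its confidence scores are hypothetical stand-ins;
# no real agent framework is implied.

from dataclasses import dataclass

@dataclass
class Evidence:
    text: str
    confidence: float  # 0..1, how well the source supports an answer

def retrieve(question: str) -> list[Evidence]:
    # Stand-in for the "go do more research" step.
    return []  # in the hard case, research comes back empty

def answer(question: str, threshold: float = 0.7) -> str:
    evidence = retrieve(question)
    best = max((e.confidence for e in evidence), default=0.0)
    if best < threshold:
        # The case that got danced past: no adequate answer was found.
        # A raw LLM is weighted toward answering anyway; an agent has
        # to be explicitly built to stop here instead.
        return "I don't know."
    return max(evidence, key=lambda e: e.confidence).text

print(answer("Which source of truth is canonical for this question?"))
```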
And I think when you're talking about questions and answers,
hallucination is one kind of problem.
When you're talking about, as he framed it, physical AI,
where the AI is embedded in
and doing things with physical reality,
what form a hallucination takes
is frightening to contemplate.
Are we going to, you know,
hallucinate a traffic jam?
Are we going to hallucinate...
That has actually directly happened.
I mean, when the power outages happened in San Francisco,
all the autonomous cars stopped dead,
because their understanding of this is an intersection
comes from there being lights there.
So they didn't know what to do
when the lights went away, and they froze.
And then they created massive traffic jams.
So you arrived at that simply by logic,
but I'm happy to provide the actual data,
because that happened about six weeks ago,
I want to say.
I missed that one.
And golly, I wish I hadn't.
That would have made me
even more confident in the assessment
that we're nowhere near this point yet, folks.
Let's not do this.
Dude, man,
you got to be reading our advisories
when we send them out.
I think we wrote an advisory
and it's on our Substack, so.
Oh, yeah.
Well, it's there.
But yeah,
there were some other things going on.
I want to say it was late December, early January.
There was stuff happening.
We've been distracted around here.
That's for sure.
Yes.
Yes.
So anyway, so that's,
that's the sort of thing.
And also, you know,
if, as Siemens posits,
and we mentioned them,
factories are entirely AI controlled,
with layers on layers of AI
talking to each other
and agents interacting,
then hallucination in an agent
can result in breaking,
you know, not just a computer
or a bank of computers,
but a whole modern manufacturing facility.
If software errors can turn
your new Pepsi bottling facility
into a pile of junk,
then you've got to treat that issue
with an enormously higher degree of concern
and preparation
and layers of non-AI security
wrapped around it somehow.
That said,
I do have to give Jensen credit
for when Siemens came up
and did their keynote.
One of the things they stressed
was that they had kind of a bidirectional
working relationship.
In other words,
Nvidia was developing a lot of the code
that Siemens then used
in order to enable
the manufacturing of Nvidia chips.
So in some sense,
Nvidia is writing the code,
but they're also customer number one
of their own code.
And, you know,
Jensen even said,
look, if there's a bug,
if it comes from you guys,
I fix it first,
because I know it's going to affect my chips
and my manufacturing plant,
which I do want to highlight in a second,
but I want to circle back to,
like, what does it mean if you're in IT?
So number one,
when it comes to security,
or cyber security,
you're on your own,
we'll talk about,
in upcoming episodes,
we'll talk about what it takes
to fully secure AI,
or what can be done,
at least.
Number two,
nobody's really solved the energy problem
and the answer to everything
is we're just going to scale up
and up and up and up and up and up and up.
And everything that people are talking about
means that whatever your design parameters are
for data centers,
if you're hosting your own,
they're probably going to have to get revised upwards
if you're seriously planning to do this.
But the last thing that we didn't mention
is very practical and tactical.
Nvidia is really getting into the software
and framework and platform business,
not just the chip business,
though not really as a business.
They're writing code and developing platforms
because A, as they said,
we need it to keep developing more chips
and B, we are sort of trying to prime the market.
You need the software
that will consume our chips.
So they're doing an awful lot of open source stuff
out there, and it's worth trying to find out
what they're up to
if you're planning a major AI initiative
because chances are,
they may have done a lot of the development work for you
and there's no reason not to make use of it.
You know, most of us kind of divided the world
into chip makers and AI makers,
you know, chip makers like Nvidia
and then AI makers like OpenAI
and Anthropic.
But actually what Nvidia said
was we're sort of drifting into their space
not commercially,
but in order to drive the market.
Meter delivers full stack networking infrastructure,
wired, wireless, and cellular,
to leading enterprises.
Businesses are frustrated
with unpredictable pricing,
IT resource constraints,
complex, cost-prohibitive deployments,
and fragmented tools.
This makes it difficult to achieve the performance,
reliability and security
that modern IT and operations demand.
Alongside their partners,
Meter designs,
deploys and manages
everything required to get
performant, reliable
and secure connectivity in a space.
They design the hardware,
write the firmware,
build the software,
manage deployments,
and run support.
It's a single integrated solution
that scales from branch offices,
warehouses and large campuses
to data centers.
And that includes everything
from ISP procurement,
security, routing,
switching, wireless,
firewall, cellular, power,
DNS security,
and VPN to SD-WAN
and multi-site workflows.
Go to meter.com slash heavy strategy
to book a demo now.
That's M-E-T-E-R
.com slash heavy strategy
to book a demo.
Given how they think about things
and frame things,
though, I would be,
A, expecting an enormous
amount of that code
to have been written by AI
under human supervision,
with human reasoning.
And, B, for it not to have been
as thoroughly vetted
from a security standpoint
as one would want.
And that, therefore,
if this is the sort of thing
your organization is
going to bring in house,
subject it to the same kind of rigorous security analysis using tools
that you should be applying to any big open source piece that you bring into your infrastructure.
Do the static code reviews, do the dynamic code attacks,
and really beat it up to try and make sure that it's secure enough to make it a pillar of your business.
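As one hedged example of what "beat it up" can look like in practice, here's a small gate that runs a couple of widely used scanners over a vendored code drop before anyone builds on it. The tool choices (bandit and pip-audit, assuming a Python code drop) are our assumptions, not anything named in the keynotes, and this only covers the static side; the dynamic attacks are a separate step:

```python
# Sketch of a pre-adoption gate for a vendored open source code drop.
# Assumes bandit (static security analysis for Python) and pip-audit
# (known-vulnerability check) are installed; swap in whatever fits your stack.

import subprocess
import sys

def run(cmd: list[str]) -> int:
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode

def vet(path: str) -> bool:
    failures = 0
    # Static code review: flag risky patterns in the source itself.
    failures += run(["bandit", "-r", path]) != 0
    # Known vulnerabilities in the declared dependencies.
    failures += run(["pip-audit", "-r", f"{path}/requirements.txt"]) != 0
    return failures == 0

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(0 if vet(target) else 1)
```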
And that's a thing I think you're going to have to bring to the organization broadly,
because you're going to get lots of enthusiasm from the rest of the organization
to adopt the gosh-wow technologies as they're being pushed, advertised,
and speculated about by the industry as a whole, by the stock market,
by everybody who's got a stake in Nvidia succeeding. Be the person who asks the difficult questions
and doesn't accept the hand-waving answers, because you don't want to be the person that says,
no, we can't do that. Being Dr. No is never a great position for IT
when you're having strategic direction discussions. Instead, ask the hard questions
that set a timeline for being able to adopt something, rather than a fence in front of being able
to adopt something. And yeah, I couldn't agree with you more, John. Just to reiterate,
translating all this into very practical moves: A, if you're doing a major AI initiative,
check out what Nvidia is doing as well as everyone else, just because they may have a platform.
And B, run them through just the same rigorous security and fit-for-purpose vetting
that you would run anyone else through, which most folks are actually not doing. This is a huge
issue right now. John, you're saying this like, yeah, just run it through the same security process
that you'd run any other third-party open-source code, but realistically, even the folks in the
financial universe are still grappling with the implications of what's happening in the open-source
universe and what it's doing to them. Some financial firms have long just said we don't use it
for that reason. I'm not sure that's the right answer because you can buy code that has bugs in
it too, but you don't as often get the source code. But yeah, exactly. So yeah, those are two major
things you can think about. Also think about revisiting your data center specs if you're planning to
host any major AI computation internally. You're probably going to have to think about,
if the simulation thing takes off, doubling your requirements, because you can't
just plan for inference and training anymore. If you're in the position where you're going to be
buying some of the new generation of Nvidia stuff, keep in mind, and Huang talked about this,
that they broke their long-standing policy of not changing multiple basic technologies at the
same time in a new generation of GPUs. They basically changed all three major chips that go
into these things. That's a thing that should just give you some pause and make you think about
what kind of track record of not screwing up you need to see before you make the investment.
Because some of us are old enough to remember way back when Intel shipped CPUs that couldn't
do floating point math correctly. And it wasn't like they couldn't do any floating point math correctly;
it's just that under not-sufficiently-rare conditions, it would not do the math correctly. And so you'd
get errors in your spreadsheets. You'd get errors in your machine tool controls, whatever you
happen to be using the processors for, there were problems. So yeah, anytime somebody makes this
kind of a change to their standard operating procedure, be a little more cautious before buying.
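For anyone too young to remember, the sanity check that circulated for that Pentium FDIV bug was a single division. Per contemporary reports (not anything from the episode), the expression below should come out at essentially zero, but on a flawed chip it famously came back as 256:

```python
# The classic Pentium FDIV check, as widely circulated at the time.
# A correct FPU prints (essentially) 0.0; contemporary reports said flawed
# Pentiums returned 256.0, because 4195835 / 3145727 came back slightly wrong.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)
```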
Yeah, and I love this. Talk about being old enough. It happens every single freaking time:
somehow we've managed to suspend the core laws of XYZ PDQ. And I remember during
the dot-com boom, all of a sudden it was like, oh, it's not about having revenue or customers,
it's about having eyeballs. And it's like, well, no, because last time I checked, you can't pay
the mortgage with eyeballs. I mean, maybe you could in some sort of brave new world. And so it's
sort of like, oh, well, it used to be that you only changed one thing at a time and tested the
bejesus out of it before you changed the next thing. We're past that now. I'm going, yeah, no,
I've heard this story before and it doesn't end well. Yeah, yep. And that is a natural
segue to talk about AMD a little bit. AMD has also been doing that. They have also been doing
massive chip redesigns as well as node and rack architecture changes to generate their new AI
super cluster designs. And a reason to be cautious there. Yeah, well, and I also want to sort of
zoom out a little bit and say, if you look at everything that AMD, Nvidia, and Siemens have been
talking about, taken together rather than individually one by one, one of the things that leaps out at me from
an IT perspective is their great willingness to partner with customers in a way that maybe
you didn't expect from a chip vendor even two years ago. So that's another takeaway. If you're
planning something that's mission critical that's going to involve AI, before you go out and
sign up the, you know, super expensive designers and architects and super expensive consultants to
hold your hand, and, you know, super expensive software vendors, talk to the chip vendors and see if
they're willing to partner with you in a way that makes more sense because they are learning from
you as much as you're learning from them and keep in mind that these guys and gals have some of the
smartest people on the planet whose job it is simply to help you use more AI so they can
sell more chips. So it may be a much cheaper way to get that consulting expertise than paying
big ticket consulting folks. Agreed. And if you've got the ability to absorb the risk,
basically the potential returns, and I'm thinking of the use cases that Siemens brought forward,
the potential returns are very large. They're significant. They're double digit reductions in
time to produce or overhead to produce. So there's a reasonable return on the risk if you can,
if you can. Well, I'm going to push back on the "if you can front that risk," because quite frankly,
John, I think you're taking a much bigger risk buying off the shelf chips, going out and talking to
a bunch of consultants, going out and talking to, you know, buying third-party software and then
trying to glue that together yourself, that increases the risk in my book. If you are big enough or
interesting enough to get the full attention of the chip vendors, I think you're reducing the risk
because if you find a flaw that is actually all the way down in the hardware, you can get it fixed
much faster. Plus, they have the experience and they have been working at the software layer.
That's kind of the message that all three vendors have come out with. So I don't think you're
increasing the risk. You're not increasing the risk over buying off-the-shelf chips,
is what I'm saying. Yeah, it's the increased risk of being on the development edge of these
things as opposed to the acquisition after they've been released edge of things. No, no, no. I think
we're talking about two different things then. Basically, what I'm saying
is, if you have a project, take it to the chip vendors first and say, hey, here's the shape
of my project. Does it match anything you guys are doing, and are you willing to partner with me, with the
understanding that the end result will be more chips purchased, by you or somebody else if they
can cookie-cutter your use case? Versus saying, I'm going to have a project and now I've got to go
buy chips, and I've got to go buy software, and I've got to go get consulting, and sort of
pull that all together yourself, which I would maintain is more risky than talking initially
to the chip vendors. Potentially. I just think the number of things in which the chip vendors will
be interested in investing, and then able to invest based on how they've got to allocate their
resources, is relatively small. So definitely try that first. That's what I'm saying, but I'm not saying
it's a riskier path. I'm saying we're both violently agreeing that you
may or may not get their attention because you have to have an interesting problem. It's either
got to be interesting or you've got to be a big name brand, marquee customer. Ideally, you're both,
but if you are that, you are actually reducing your risk by working collaboratively with
the chip vendors as opposed to trying to do it all yourself. Yeah, okay, that's what I
wanted to clarify. And before we wrap up, because I know we're getting kind of close to the end,
I do want to circle back to the whole... one second, John. I'm sorry, I have to put my cat on my lap,
because if I don't, he will shred my legs and, more importantly, my pants, which I would
prefer he did not. So now he's on my lap and he'll be very good. Okay, back to IoT and
digital twins, I was fascinated by the fact that Siemens is putting a lot of emphasis on digital
twins. And while they spoke as though this is strictly an AI initiative, it actually isn't.
Digital twins can be enabled via AI to a certain point, and are being enabled via AI in fact,
but they are also being enabled by other technologies, up to and including things
like quantum computing again. So the reason I'm bringing this up is, I think it was sort of
one of those things that was cool until about 2023 when AI took over everything, but digital twins
are continuing to grow and continuing to prove extremely useful in quite a lot of scenarios,
particularly but not exclusively manufacturing. So you may want to, for example, touch base with
the chip vendors to find out more about what they can tell you about their digital twin initiatives
and get some pointers on that if you're looking to launch a digital twin initiative.
Because that is highly strategic for Siemens. So I think if you call them, they'll probably have
something to tell you. And for pretty much any kind of industry that Siemens has a lot of business
with, I think digital twinning is almost always useful to at least contemplate, or at least
contemplate the utility of, let's put it that way, because it's likely to be useful. So on that,
I think yes, I think it's great also to be reminded that a lot of the things that we called AI,
you know, machine-learning-related AI, algorithmic-development AI, predictive modeling, all that stuff,
we stopped talking about as AI as frequently once LLMs came to dominate the conversation.
And Siemens is very clear that this is a continuum and they're still working all along it.
Using the parts that make the most sense for each layer in the cake, let's say.
Yeah, and I would say of all the chip vendors, Siemens seemed to make the most sense when it came
to talking about its AI strategy because, you know, the keynote opens with, look, we've been doing AI
for 50 years. If you do the math, you know, that was well before Sam Altman was
even a glimmer in his father's eye. So clearly this definition of AI is much broader, more
all-encompassing, and more similar to, frankly, John, what you and I studied when we were studying
computer science in grad school. So yeah, I think separate from AI, it's interesting to note
that the digital twinning thing is proceeding apace and is very relevant if you have an industry where
there are significant physical components, whether it's manufacturing or something else, distribution
logistics, definitely worth thinking about. And kind of the message here is go talk to the chip vendors.
Before you start a major project, go talk to the chip vendors, see if they can give you a
hand, because you might be surprised. And I think the big, big, big meta takeaway is
these guys are dabbling in areas outside of silicon, which may or may not be a good strategy for
them, but it's definitely a great strategy for you, because it means free stuff, where stuff is
not silicon but help and software, things that you would otherwise pay for, that these guys,
being in the business of selling you silicon, are going to consider just freebie giveaways.
It's a great way to get yourself launched for less money by talking to these vendors.
And speaking of, you know, the very meta view, I would like to highlight that
Jensen also mentioned, I guess because metaverse is taken, you know, thank you very much,
they came back with the Omniverse, which is basically the simulated reality, which is like,
oh guys, it's starting to get ridiculous at this point. Like yes, I have always believed that
simulated realities will serve an absolutely wonderful purpose, not necessarily what the chip
vendors are talking about or the AI vendors are talking about because I don't think simulations are
great to train algorithms for the reasons that I've given, but they're still fantastic places to
interact to brainstorm for humans to connect with AI's and to connect with each other. So
the idea of a metaverse is a good idea. I guess Nvidia agrees, and that's why they came out with
the Omniverse. I don't know. Metaverse definitely got a bad reputation because the gun was so
thoroughly jumped during the COVID thing. Yeah, well, you know, you can't always be right. So
that's kind of my takeaway. I mean, I can see some real implications for IT folks. I guess
if you're listening to this and you're like, what are the questions I should be asking? It's like,
hey, chip vendor, are you doing anything in XYZ, PDQ? Because my company is thinking about getting
into it. And don't assume that just because you're small, they won't be interested, because
if you have enough of a cutting-edge use case, they might. And you know, Nvidia might give you a
pass, but AMD might say, hey, let's talk. Or Siemens, depending on what you're doing. And then, John,
I think the other great takeaway is, if people are handing you open source code, you know,
just because it's AI doesn't mean it's bug-free or security-uncompromised. Look the gift horse in
the mouth. Definitely. Many times. Yes. Yes. And I think that's about it, right, John?
For today. Okay. Well, thank you for listening with us. As always,
we love follow-ups, so just hit us at packetpushers.net slash follow up. We'd love to hear from
you about your own experiences, particularly if you have worked with any of these vendors.
Also, just a heads up: at packetpushers.net you'll find the Network Break podcast, which John and I
are both on, sometimes at the same time but rarely, and it also helps you stay current on the latest tech
news. And also any interesting surveys that Packet Pushers has done recently; they did one
around the January timeframe that was the salary survey. You don't have to give
any information. You can just go ahead and download it. So check it out. There's cool stuff
out there. And if you want to talk to me or to John, or to read more about what we
regularly write about, you can reach us at nemertes.substack.com. That's our Substack publishing
arm for the company. And there's a little tab up there. You can click and it says contact us.
And that goes directly to us. So please reach out. We'd love to hear from you.
