
All right, Reid, we never do endorsements, but for those of you who are watching on YouTube,
you may notice that I'm wearing a Patagonia and there's a picture of a mountain behind me,
which can only mean one thing: I'm at the Grand Canyon. So I'm just going to give a shout-out.
A lot of people have heard of the Grand Canyon, but it's really amazing. And you should
go, and you should hike down it. And if anyone has kids, make your kids go on the hike. So
that is my endorsement for the day before we get into lots of AI news.
I think an endorsement of a national treasure is, you know, a good way to begin.
People have heard of it. All right. So recently, Samsung's Co-CEO TM Roh announced at CES
2026 that the company wants Google's Gemini AI running on 800 million devices by the end of
this year, which would be double the 400 million that it reached in 2025. Samsung is adding these
features to TVs and home appliances. I feel like you do get some consumers who are a little annoyed by
that, saying we don't want smart TVs or we don't want smart fridges. So it'll be interesting to see
what the integration of AI into those smart appliances also means. But also, consumer awareness
of Samsung's Galaxy AI brand has skyrocketed from 30% to 80% in just one year. And consumers are
using AI a ton on their phones: for search, generative AI photo editing, real-time translation.
Actually, just yesterday, my husband gave me a photo of the Grand Canyon. I was like, that looks
amazing. He's like, oh, yeah, AI edited out all the people. So people are using this in real time
as it becomes accessible on their devices. Last week, OpenAI also became the first major AI
company ever to launch a dedicated, voice-based conversational app on Apple CarPlay. And they're
rolling out ChatGPT as a hands-free voice assistant for drivers. So CarPlay 2.0 supports ChatGPT,
Google Gemini, and Claude. It is very clear that AI is coming to hardware. So my question for you
is how important is this AI integration at the hardware layer? And does that mean that whoever
owns the hardware actually ends up owning the majority of the value that AI creates as opposed
to the software layer? Well, I'll start with the simple, which is I don't think hardware
ownership will dictate the greatest value in the AI layer. That doesn't mean there isn't
a significant impact from it, because when people buy a piece of hardware, that hardware is their
access point, whether it's a car, for the in-car operating system or AV, whether it's a phone,
whether it's a TV. All of these things, that's the AI that they get with that device.
And these tend to be big purchases. The TV tends to be the central thing for the family.
The car or two is the transportation, et cetera. And so there is a significant kind of exposure,
value creation, value capture moment there. And so I think it is important for that.
On the other hand, part of what I think AI should be looked
at through is: what's the number of minutes and hours of AI being used to create value?
And to some degree, when it's creating value for me, is that value more substantive?
And I think that's one of the reasons why the value, for example, comes through your ChatGPT
app on your phone or through your Copilot, Claude, etc., app on your computer.
Those things involve hours and hours and hours of interaction and things that you're creating,
and so they're on the more general platforms for this. And that's one of the reasons why,
of course, Samsung is using Gemini, and why OpenAI is integrated,
along with substantive ones like Gemini and Claude, into the hands-free voice assistant
for CarPlay. The iteration of these things into more value comes out of the hours of
interaction, versus the driver being a commodity that just happened to get slotted into your hardware.
And so that's the reason why it's not a hardware-runaway story.
Now, that being said, obviously, it's part of what is becoming a much more mainstream adoption
when it's just there. Now, I think people are still a little bit slow about what they're doing when
they're talking to their TV; they're familiar with their remote, et cetera. The AI is just, okay, play Netflix.
Find Wednesday on Netflix. Like, okay, that's fine. And by the way, much better than the
kind of remote experience, especially when you get to the Apple TV remote, which is, you
know, simple and simply useless as an interface point. But on the other hand,
the thing that makes AI valuable is not its translation moments of, oh, I can now hear you say
Netflix. And, you know, of course it's better than Siri and it's better than Alexa and all
right. But that's not the thing. It's actually kind of a much more substantive set of
things that is in what you're creating and what you're doing. And the iterative cycle of that is
within the frontier models themselves. And that will drive towards, you know, kind of upgradable,
updatable, flexible hardware patterns, because there simply will be a huge amount of demand for
"I want the one that really works here." And even if that demand is slow, because I don't realize that
I can say, hey, Netflix, I liked these seven shows recently, what are another
five shows you'd show me that would be interesting? That's obviously
the beginning of something much more interesting. And, you know, even when you're integrated
into hundreds of millions of Samsung TVs, that's still something that we're building
towards, where we're enabling the user adoption, even if the functionality is all essentially there
right now. Right. We're so at the beginning of this. And so, you've spent your
career thinking about network effects, especially from a software perspective. But from a hardware
perspective, does this mean that the model that is on 800 million devices wins? Like, of course,
on your phone you could use whatever app you want. But again, there are probably going to be some,
you know, preferential models that people use. There are going to be deals that are struck between
different companies. Does that mean that it's sort of game over for whatever model is on those 800
million devices, because people will be locked in? You know, at this point, there are many more devices
than there are people in the world, in a sense. And just because you have one device, like a Samsung
device, doesn't mean you don't have other devices. And this gets us, you know, there's different ways
of kind of understanding network effects. And you just because you're on a network, doesn't mean you
have a network effect. The strong and weak network effects, strong network effects are because I'm on
this network. I'm not on other networks. Weak network effects are on this network. And I can adopt
other networks like instant messengers, you know, people might be using signal, but also what's
happened also. I message and, you know, telegram, et cetera, I'm using the whole set. And so those
are weak network effects. It's like, I have a reason to stay on it when I do it. But my ability to
adopt new networks is just the cost of that. And so, you know, so there's those. And then there's
like, you're on a network and being on a network doesn't necessarily matter anything. And that's
part of the reason my answer to the earlier one was: look, there are more subtle networks. Like,
what is the reinforcement, how is the model getting better, which gets driven through,
you know, depth and engagement of use, which in the case of TVs is likely to be low for
some time, even if you're on hundreds of millions of TVs or devices, you know, in this
case. But no, I think it will be more on phones. And so the Samsung phones, which, you know, I have
a couple of, they're very nice Samsung phones, those will create a
form of network effect there in terms of the learning and adoption. Like, one of the things
about being in the search engine business is that knowing what the query stream is, is a way of doing it;
the same thing in terms of, how do you make an AI thing more magical? Also, how is
that learning? I mean, this is part of how, you know, figuring out good
sets of queries and good sets of answers is part of how AI is trained. It's part of how search
engines are improved, you know, et cetera. And so I think that engagement pattern really
matters. Now, that being said, none of this is to underplay that it's a distinct advantage
to be on a number of things, especially if it's kind of the equivalent of, hey, I'm using
this thing and I see how great it is. And that's part of why, like, everyone who actually,
you know, uses models other than Grok realizes how bad Grok is, because Grok trained to the
benchmarks and, you know, is actually just not as useful on almost any vector
other than, you know, maybe creation of questionable pornography, than any of the
primary models. And so you get exposure to that. So, say, for example, you're
getting exposure to ChatGPT through CarPlay and you go, well, this is really good. And it's useful
to have that exposure. And then, you know, if someone is trying
to say, hey, use this other model, and it's worse on a lot of queries, you're like, ah, I want to stay
with this. Plus, I'll get familiar with it a little bit in various ways. And then, of course, the subtle
thing that might begin to kick in, which is not a network effect but a sticky effect, is that
it starts having memory. And it remembers you. So, like, I've been driving for two years with
CarPlay, and it knows, you know, what kinds of things I like. And it knows that when I say, "play
The Police," it isn't "look out for the police around me." It's, you know, take this band that
many young people don't know, and, you know, play it, and it knows the songs you like.
It knows that in the evening you want a pick-me-up.
And so, yeah, that's super valuable. Yeah. So those things, I think, kind of contribute,
but they're not exactly network effects. And moving to the question that is on a lot of
people's minds. Everyone is talking about data centers. Alphabet, Amazon, Meta, and Microsoft
are expected to spend more than 650 billion dollars in 2026, just this year alone, to expand AI
capacity. Analysts estimate, though, that somewhere between 30 and 50 percent of the AI
data centers that are planned for deployment in the US will be delayed or canceled. And the reason
is electrical components. That is the bottleneck: batteries, transformers, and circuit breakers,
which make up less than 10 percent of the cost to build a data center, but without which it's
impossible to build one at all. Lead times for high-power transformers used to be around 24 to
30 months before 2020, but now that timeline has stretched out, in some cases to five years.
So these construction projects, even if we get them in the ground right now, won't be able to
help us for years to come. Across 140 construction projects, data centers representing
at least 16 gigawatts of capacity are slated to come online, but only around five gigawatts
are currently under construction. And by the same token, US utilities imported more than 8,000
high-power transformers from China in 2025, up from fewer than 1,500 in 2022. This is all
leading me to say: we are importing some of the most critical components of our data
center capacity from China. And obviously, last week, we talked about the geopolitics of it all,
like, what does it mean for China to be supplying some of the most important things for our AI?
So, if we're spending 650 billion dollars to win the future of AI, but it is
fundamentally dependent on a geopolitical rival, what does that say for what we're doing? And
is this a problem as we try to stay on top of AI in a geopolitical sense?
There are various ways in which we have dependencies in the AI value chain, which is one of the reasons
why I think, you know, kind of call it a national policy of being less terrible, or other kinds of,
you know, puns on terrible. And it's not just the transformers. It's chip supply,
which obviously is hugely TSMC- and Taiwan-dependent. It's adoption, which has other dependencies.
Like, if you go all the way from the construction of the components, of which,
you know, the transformers are a kind of surprise thing, to chips. And then people
frequently underrate the networking infrastructure. Then you get data centers and
the composition of data centers, then you've got the build-out of the
compute infrastructure, then you've got the models and their training and engagement. So you get this whole
thing all the way to people actually using it. So I think there are a lot of different dependencies,
and a dependency on a geopolitical rival is certainly worth paying attention to. But, you know,
it's a little bit of the reason why, for example, Nvidia, you know, kind of wants to have
its cake and eat it too, e.g., sell a huge amount of chips at very high margins, but also be the
builder and provider of, you know, AI models and so forth. And so it's kind of doing both,
but their challenge is, you know, since they've got massive demand for the chips, all the ones they
hold on to, to do any kind of internal project, then, you know, hit their bottom line in terms of
undercutting current sales and booked-in margin. But what that means, and it's
one of the reasons why Nvidia has been in a strong position, is that everyone says, well,
look, the thing that most matters is that I can continue to build out AI in strong ways.
So with China, you go, well, I suspect the price of high-power transformers is going to
go up, but they're going to be selling them broadly and probably to whoever meets the price,
which can include the US, you know, kind of as a way of doing this. And this is actually one of
the orientations by which the investment of capital is one of the other things that keeps the US
in a substantive lead. Because if you think about the $650 billion of investment
in a set of things which have partially demonstrated revenue but a whole bunch of uncertainties,
you go, well, which countries in the world can do that? And the answer is one, right? The US. None
of the other countries, including China. I mean, the government has that potential capability,
but the companies don't operate that way. They have much lower revenue streams;
they have much lower, you know, kind of ability to invest in this. It's one of the reasons
why a lot of the AI innovations that are coming out of China relative to software tend to be about
efficiency, and tend to use distillation of various models as a way of doing it, because it's
like: actually, we have a massive amount of talent, we have a massive amount of data,
and we have some compute, but we also have a lot less capital to just burn with an uncertain
return in revenue. So I would say it's worth paying attention
to. It could turn into a sudden, you know, terrible vulnerability. It's one of
the many kinds of nuttiness around, you know, "piss off our friends and allies as much as
we possibly can." You know, it's kind of the strangeness of this. I think it's one factor
among many, not a, you know, five-alarm fire. No, fair enough, certainly something that we need to
watch. And I think another thing when it comes to AI, people are talking about, is trust. AI has come
along at a time when trust in government, institutions, and companies seems to be at an all-time
low. We've been talking about AI at the national level, and I want to take a moment
to talk about our government institutions. Longtime listeners will know that you launched a challenge
last year with Lever for Change called the Trust in American Institutions Challenge. It was a
10 million dollar open call, and we were asking organizations to submit and tell us what they were
doing to rebuild trust in institutions in the United States, whether it's the criminal justice
system, the education system, our national media, our local media. All of these things are
critically important, and we're not going to have a functioning society if our citizenry doesn't
trust these institutions, but also if these institutions aren't responsive
back to citizens. And so, months ago, we announced the five finalists for the Trust in American
Institutions Challenge, a 10 million dollar open call for bold ideas to rebuild
and scale public trust. The five finalists were the American Journalism Project, CalMatters,
Recidiviz, Results for America, and Transcend. The great news is that yesterday, Lever for Change
announced a winner: CalMatters. CalMatters is a nonprofit, nonpartisan news organization,
and it's focused on transparency in government. Right now, they're focused on California
politics and public policy, with an eye towards expansion all around the United States. So I
loved hearing about what CalMatters did, and especially what they are planning to do with the
integration of AI. When we look at government right now, AI is
something that can really help it analyze the enormous troves of data that we have. We have
building codes with, you know, 10,000 pages of things that people need to do. We have congressional
votes over years and years and years. For everything we have around government and data, AI
can be an enormous force multiplier in terms of understanding what's really going on and actually
providing solutions for our citizens. And so, Reid, I would ask you, you were a part of this process.
You were really excited about all of the organizations that submitted and the five finalists.
What excites you about CalMatters, as well as the role of this challenge in helping
rebuild and scale public trust, especially with AI? Start from the very top. One of the things
that's interesting about this is, you know, I've been helping Lever for Change from its very
beginning and its spin-out from MacArthur, because they have a really interesting model of using networks
to create, you know, highly validated and leveraged philanthropic dollars. You know, there was
100&Change, which is the thing they launched, and then creating a whole
platform and spinning out, with Cecilia Conrad doing an amazing job of this and, you know,
having some great folks doing that. And so, as you know, we've been talking to them for years about
what kind of projects to do. And the reason we started with, you know, kind of trust in
institutions is because, you know, the thing that's probably most scary and disheartening about
our current moment in many Western democracies, and maybe other places, is a tendency
to say, burn all the institutions down. Like, they're not working for me, so burn them down.
And when you look at history, "burn them down" leads to just terrible outcomes, whether it's the,
you know, kind of French Revolution, you know, whether it's the Cultural Revolution in
China. Like, each of these things, and there are just dozens and dozens of them,
leads to enormous suffering, setbacks in society, et cetera. Because the intelligent thing is to say,
look, we really depend on institutions functioning to have society function. And by the way,
we need informational institutions to function in order to function as a democracy. And, you know,
they've tended to be highly politicized, and people say, well, everything is political. Now, my
personal point of view is something like The Economist's, which says, hey, we
have an informed point of view, here are some of our principles, and let us tell you where
we stand on the basis of that informed point of view, as opposed to, you know, slogans of
"fair and balanced," which mean, you know, the slanderous and unbalanced equivalent.
And so, you know, trust in these kinds of informational things really
matters. And so that's why we said, this is what we will do in terms of trust and information.
And to be clear, it actually wasn't focused only on journalism. It included libraries
and a bunch of other things, because, you know, rebuilding institutions is the thing that we
most need in society. Now, what works really well with Lever for Change is that they
go out and get a whole bunch of different institutions aware of the challenge, you know, nonprofits and
organizations, even inventive individuals.
People can submit widely divergent proposals, something that's far beyond the vast majority of
philanthropists' capabilities, including my own. And then they bring in networks
of experts and networks of people in order to evaluate and say, you know, what's
the probability of this? Now, one of my delights, watching
at arm's length from all of this, because part of it is to have it as an independently driven
organization, was that all of the finalists were amazing.
And CalMatters happens to be one of the finalists that I'd actually already been a donor to over time.
And so when they came back with, oh, this is what we think is the top pick,
I was like, well, that was kind of cool, because, you know, I had paid attention to them,
because I always tend to have this point of view of having some responsibility to the communities
that have enabled me, the communities that I participated in. So, for example,
not just Silicon Valley, with Second Harvest Food Bank, but also California. CalMatters was part of that
because, you know, one of my big frustrations is people go build important things in California
and then they go, I've got this frustration with California. And by the way, you might have a very
legitimate frustration with California. There's all kinds of nuttiness with prop taxes, you know,
propositions, the wealth tax, and everything else that's kind of going on, and there's genuine
frustration. But this is also the place that enabled you to do these amazingly scaled, magical things. And so
you should also have some sense of participation, giving back, loyalty, reinvesting in
the kind of seed corn that allowed the flourishing of the crops that you made.
And so, the fact that CalMatters was part of this was awesome. And of course,
part of the thing that's really important to having a functioning democracy is to have
access to good information: good information about how well a legislator is working, what
policies are working, what things really matter for citizens. Is
the budget stuff actually working out? Is this a lie or the truth? You know, have there been,
you know, the results that are claimed? Is it working or not for the people the right way?
And CalMatters basically says, we're going to do this as kind of the equivalent of: our only
real point of view is understanding, you know, which programs actually are working
or not working. When various politicians
are making claims about things, which of those claims are accurate? When propositions are making
claims about things, which of those are accurate? And to facilitate it so that people say, okay,
this is the sort of thing where you're just trying to make sure I have the information, as a California
citizen, as a California resident, to inform what I'm doing. And of course, try to create that
as a basis to be an incentive system for politicians to operate the right way, for journalists
to be able to understand truth and write stories that help with that the right way, so
that then, you know, citizens can go, okay, that's a perspective that is trustworthy.
Not because there aren't errors, there are in anything in life sometimes, but because they did a lot of real work
to try to make it accurate to what they're representing. And so it's a delight
that they're the selected honoree, and, you know, I couldn't be happier for them, and all the
finalists were amazing. Awesome. Reid, thank you so much. I will give one final pitch to our listeners.
If you are looking for a nonprofit to get involved in, to donate to, these organizations have
been vetted. As Reid said, we brought in, you know, hundreds of experts to look at all these
organizations. So, once again, if you are excited to give back, if you care about trust in American
institutions: the American Journalism Project, CalMatters, Recidiviz, Results for America,
and Transcend are all incredible, amazing organizations that are doing great work. Reid,
thank you so much for being here. A pleasure. Possible is produced by Palette Media.
It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young.
Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strassmore,
Imozu, Trent Barbosa, and Tafadzwa Nemarundwe. Special thanks to Surya
Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato,
Parth Patil, and Ben Relles.



