
In 2016, diplomats reported a strange burst of sound — followed by months of debilitating symptoms. “Havana Syndrome” sparked questions and conspiracy theories across the web about a possible unseen weapon. Now, new reports from Norway describe a scientist experiencing similar effects after testing a microwave device. Host Nicky Woolf asks: if such technology exists, who owns it and what are they doing with it next?
Also on The Interface this week: at the landmark trial in LA, social media companies behind platforms like Facebook, Instagram and YouTube are defending themselves against the accusation that their platforms are addictive to young people. We ask what the fallout from the trial will be, and what Section 230, a legal clause dating from 1996, really means in the social media age. Plus, host Karen Hao has spent the week rubbing shoulders with the great and the good of the AI companies at the AI Impact Summit in Delhi; but what really goes on in the shadowy meeting rooms around the fringes?
The Interface is your weekly guide to the tech rewiring your week and our world. Hosted by journalists Thomas Germain, Karen Hao, and Nicky Woolf, each episode unpacks week-by-week the unfolding story of how technology is shaping all of our futures. No guests. No jargon. Just three sharp voices debating the tech stories that matter - whether they shook a government, broke the internet, or quietly tipped the balance of power.
New episodes drop every Thursday on BBC Sounds in the UK. Outside the UK, find us on BBC.com or wherever you get your podcasts, or watch the video version on YouTube (search “The Interface podcast”).
To get in touch with the team - email us at [email protected]
The Interface is a BBC Studios production.
Producer: Natalia Rodriguez Ford
Executive Editor: Philip Sellars
This BBC Podcast is supported by ads outside the UK.
You don't need AI agents, which may sound weird coming from ServiceNow,
the leader in AI agents. The truth is, AI agents need you. Sure, they'll process,
predict, even get work done autonomously, but they don't dream, read a room, rally a team,
and they certainly don't have shower thoughts, pivotal hallway chats, or big ideas.
People do. And people, when given the best AI platform,
are freed up to do the fulfilling work they want to do.
To see how ServiceNow puts AI to work for people, visit servicenow.com.
The best B2B marketing gets wasted on the wrong people,
so when you want to reach the right professionals, use LinkedIn ads.
LinkedIn has grown to a network of over 1 billion professionals,
including 130 million decision makers, and that's where it stands apart from other ad buys.
You can target your buyers by job title, industry, company, role,
seniority, skills, company revenue, so you can stop wasting budget on the wrong audience.
It's why LinkedIn ads generate the highest B2B return on ad spend of major ad networks.
Spend $250 on your first campaign on LinkedIn ads and get $250 credit for the next one.
Just go to LinkedIn.com slash broadcast. That's LinkedIn.com slash broadcast,
terms and conditions apply.
What they are ultimately trying to create is going to replace humans.
What it's doing is parboiling the inside of your brain.
This could be a turning point in the history of social media and of the internet.
Welcome to The Interface, the show that explores how tech is rewiring your week and your world.
I'm Karen Hao. I'm Thomas Germain. And I'm Nicky Woolf.
This week on The Interface, we will be discussing:
Does the US have access to a brain melting device?
Could one lawsuit change the future of social media?
And fear and loathing at the world's biggest AI summit.
So we actually have an update on last week's episode, where we talked about data centers in the UK and how they were potentially undermining climate goals in the UK. Right after that story, there was new reporting that came out from the Times that revealed that around 140 proposed AI data centers have applied to connect to the UK grid, and all of the energy, all of the power, if you add all of that up, would be more than the power demand of the entire country.
Wait, so they're going to add more electricity than the whole country is using right now?
That's the plan. Whether or not it gets approved, who's to say, but the UK is basically a floating island of data centers now. So for this to happen, the UK has to double its electricity output. Yes. And that's never going to happen overnight; that's going to take ages.
That is the thing. We have to take this with a grain of salt, because, 100%, there are probably going to be issues with getting that much power to these data centers.
Yeah, no kidding. Well, I have an update on my story from last week. So if you didn't listen, I did an experiment where I convinced ChatGPT and Google Gemini, and the AI answers you get at the top of Google search, that I am a world champion competitive hotdog eater. The point being that these tools are being manipulated, and this is happening on a massive scale. The story wasn't about me making the AI say dumb things; it was about how easy they are to trick. An interesting thing happened. Gizmodo, where I used to work, wrote an article about my article. They reached out to Google, and Google said, yes, "we had a misinformation event" was the quote, where a reporter went and messed with our systems. Kind of downplaying it, like, oh, one guy did one thing, not a massive problem across the whole internet. Which, you know, I guess there are two ways to look at it. What is that, even? The whole internet is a misinformation event. Yeah. Well, I'll say it together.
And we should also mention we got some really, really interesting comments on last week's episode. A couple of questions that relate to your story, Tom, which I thought were really, really interesting. Yeah, I saw this. This is great. You know, we read every single one of your comments, so if you want to reach out to us, we read every single one of these. There was this great story where this person said that they were, like, metal detecting on a particular beach, like looking for gold doubloons, like old coins. Yeah, they said they didn't find anything, but they were making a video about it where they talked to some guy on the beach, asking, like, are there gold coins here? And then later this person said they went and asked AI about it, and the AI referenced their video. It's this weird, like, self-referential loop: you post something on the internet, and then the AI spins it out as though it's, like, established truth. Yeah,
it's information eating itself. So a big part of the story we were talking about with ChatGPT was these AI overviews, as Google calls them. Like, you know, when you get the AI
at the top of Google search results, somebody asked if people just look at the AI stuff in Google
and stop clicking on links, isn't that going to cause a problem for the websites that are
producing the information that the AIs are pulling from? It is a huge problem. There's been some
research that when Google's AI overviews show up, the traffic that Google sends to the other parts
of the internet can drop by as much as 70%. This is a great question. Definitely something we're
going to go in a lot more depth on in a future episode. So stay tuned.
Okay, so I want to jump in with the first story because I've been waiting for this development for
three and a half years now. So I'm a reporter who reports on conspiracy theories, right?
I was brought in for what at the time was considered a massive conspiracy theory,
which was called Havana Syndrome. Back in 2017, a whole bunch of people at the US Embassy in Havana
started getting mysteriously sick. Some people said it was an attack with a weapon. Some people
said it was just, you know, that they were essentially making it all up. I got brought in to debunk
the conspiracy theory of it being an attack with a weapon. And against all expectation,
I ended up concluding that it was an attack. And in the last couple of weeks, I've been proved
right on that. And it all centered around like everybody there said they heard this like weird
sound before they got sick, right? There was a weird buzzing noise and then the cognitive symptoms
would start. They were dizzy. They were nauseous. They were having trouble thinking
and nobody knew what this thing was. By the summer, there were 50 or 60 cases. By the time it
went public, suddenly there were thousands of cases. Wow. It was called Havana Syndrome quite
quickly by the press. The other name for it was "the immaculate concussion," because the symptoms were sort of like what NFL players get: a long-term brain injury. And these brain injuries were,
in some cases, showing up on scans. We were talking to neuroscientists, and they were telling us that at least in the core number of cases, what people were experiencing was real. But at the time,
everybody treated it like it was a big conspiracy. People were laughing at that. People were like,
that's impossible. There are still a lot of people who believe that this is psychogenic.
The power of suggestion is causing people to have these symptoms.
To me, it seems perfectly likely that both of these things are true. Especially once it was in the news, people were, you know, very, very susceptible to suggestion, with the story all over the media. It seems pretty likely that it's both. Among the things we looked at in the investigation, the one that was at first put forward was: if it was real, then it was
some kind of sonic energy device. We know sonic energy devices exist. There's this thing called
an LRAD, a long-range acoustic device. These are truck-mounted things that they use for crowd control. One of them was deployed in Minneapolis in the past couple of months.
During the immigration raid protests? Yeah. The problem with that is they're massive. They are
the size of trucks, right? It quickly became clear that that wasn't likely to be the case,
because you can't really hide one of those. Also, it would have had to get through the embassy's concrete walls and bulletproof glass and all that kind of stuff. What we finally landed on
was that it was likely, if it was real, some kind of microwave energy device. The way we landed on
this was: we built one. With my friend, who's a physicist, we cannibalized a whole load of commercially
available parts. We focused a bunch of microwaves into a big dish. Like actual microwaves like
you'd have in your kitchen? Yeah, we cannibalized them out and pointed this thing at a microwave energy
detector over a distance. And it worked. I'm sorry, what do you mean it worked? Like, someone started having an immaculate concussion? Well, it's interesting you should ask that. The thing that happened
last week was that the story came out that a Norwegian government scientist had also built a test device, a Havana Syndrome-style device. They set out to debunk it as well.
So this scientist is a Norwegian version of Nicky, right? Except he, pretty unwisely, pointed it at himself. And the following day he came down with all of the symptoms of Havana Syndrome. He was having cognitive difficulties. He was having nausea. He was having dizziness.
I mean, this is a serious condition. And what it's doing is basically very slightly parboiling the inside of your brain, which is horrifying, right? And it was being deployed against American diplomats and CIA officers. So it's like literally sticking
your head into a microwave. What was the government saying this whole time? It's interesting that you had to do this in the first place. Like, what was the argument going on here?
So each government department seems to have a different line on this. The DOD,
which has some personnel who have also come down with this, is leaning more towards this is real.
The FBI is following the CIA's lead. The CIA has been the strongest saying this isn't real.
And nobody seems to be able to agree. So that means there has never been any official US government confirmation of Havana Syndrome. And the sufferers, of which there are about 100 cases that have been confirmed by the DOD, are now being confirmed to have something that the US government does not officially acknowledge exists, which puts them in a really, really unpleasant gray area. So these are people who are now unable to do their jobs. It's no joke; they are quite seriously and permanently, kind of, cognitively disabled. That means that they are not getting their medical care covered, in some cases, by the State Department or by the CIA. There are a couple of lawsuits going on where they are fighting to get all of the support that they really, truly deserve, which is devastating for them and their families.
And the other knock-on effect that it's having is that the State Department and the CIA are struggling to fill overseas positions, because in some cases family members were affected, children were affected. People with families do not want to take overseas postings, quite reasonably.
Is there like a reason that you think the CIA denies this? Is it because they secretly have a
device? That is my hunch. Now at the beginning of this year, so about a month ago, there was an
announcement that the US had purchased a Havana Syndrome device and was testing it and had been
testing it for about the last year. That was that was the previous news story to break before
this Norwegian one. Didn't this happen, like, when the US invaded Venezuela? Like, wasn't there a security guard who said it seemed like one of these devices was deployed?
Yeah. And I think it was Trump who said that there was a device called something like the Discombobulator. The Discombobulator. That's a sick name. That's pretty good. It's pretty good. That's like some 1940s comic book stuff. The thing that was interesting about the announcement of
the one that they've obtained is that they said that it fits into a backpack. That really
changes the game in terms of how easily it can be deployed in this kind of situation.
Now that people are increasingly realizing this is a thing, do you think that it could spread
to the point that it actually starts having a pretty significant effect on geopolitics?
I mean, it's already had a serious effect on geopolitics. This all happened after Obama opened up relations with Cuba. This was then a really easy pretext for the Trump administration to roll back all of the Obama-era opening up. And so that immediately tanked both US-Cuba relations and the entire economy of Cuba, which has been in, basically...
We focus on the part of the internet that most people don't know about. It's called the Dark Web. Undercover in the furthest corners of the Dark Web, US special agents are on a mission to locate and rescue children from abuse. From the BBC World Service, World of Secrets: The Darkest Web follows their shocking investigations. Listen on BBC.com or wherever you get your BBC podcasts.
The digital world feels more chaotic than ever. Huge data breaches, AI threatening jobs, foreign meddling, that creeping feeling of obsolescence. It's information overload. I'm Dina Temple-Raston, host of Click Here from PRX and Recorded Future News. Wanna understand how we got here, and how you can get ahead of it all? Listen to Click Here. We can help you make sense of all the noise, wherever you get your podcasts.
All right, so, a well-deserved victory lap for Nicky there. Switching gears, I want to talk about a trial that's been going on in California over the past couple of weeks. Social media is on trial. You've heard this one before, but this case is different and, I think, really dramatic.
So the argument that is playing out here is whether or not social media apps are addictive
and whether the companies are making them addictive on purpose. It all centers around this one
particular case. It's in Los Angeles. There's a 20-year-old woman and she says that she joined
TikTok and Instagram and YouTube and Snapchat and all these other apps when she was, like, 10 years old,
and her use of these apps, according to her, caused body dysmorphia and all kinds of really
horrible mental health problems. She settled with TikTok and Snapchat before the trial even started,
but Meta and YouTube are fighting this in court arguing whether or not their platforms are addictive.
And what exactly, what methods are they saying that these companies used in order to do this?
So the argument that the plaintiff is making here is that social media companies are operating "digital casinos"; that is the term that they used here. So they're saying that features like infinite scroll, that's one example, you just scroll forever, it never stops; they're saying the way that notifications are designed, the way that videos just keep playing and playing automatically. What's really interesting here, though, is that for the whole history of the internet, or at least the modern internet, the past 30 years or so, there's been this law called Section 230. Maybe you've heard about it before, if you're cursed like I am with paying too much attention to computers. It's a law that basically says big online platforms are not responsible
for the things that their users post essentially. They have to do their due diligence to make sure
there isn't like illegal horrible stuff happening, but aside from that, essentially it's like well,
our users posted that that's not our fault. We don't like it, we're not happy about it, but we
can't be held legally responsible. This case is different because they're arguing it's not the
content that causes the harm like yes, there's harmful content on here, but they're saying it wouldn't
be as much of an issue if they didn't design these tools to be addictive. Now that's something that
Meta and YouTube and all these companies completely disagree with, right? They're saying, like,
no, we're not, our platforms aren't addictive. They basically argued that you can't get addicted
to these things. It's not like alcohol or cigarettes or something like that. YouTube actually
argued in their opening statement that they're not a social media platform. They say we're
entertainment, like we're like HBO, right? You can't get addicted to HBO. But there's an argument that algorithmic social media, the way TikTok and Instagram now run, is sort of by definition addictive. And there's, you know, you could look at this multiple ways, right? Like, what the
companies will tell you is we're just trying to serve you content and videos and posts that you're
going to love. And you have fun looking at our platforms and then you do it for longer, right?
And you could look at that and be like, yeah, that's the reasonable argument, I think. The other
way to look at it is they are designing these algorithms in general. They're optimizing for
engagement is what they call it, right? They're trying to build them to keep you staring at your
phone, staring at your computer. They use notifications to bring you back. Throughout the history of social media, they brought in psychologists to help them design the way that their platforms work
in order to, you know, latch onto the inner machinations of the human mind. So I think that's a great point. That's kind of what they're trying to do: build them in a way that keeps you looking at it, which is why, you know, they use this casino analogy.
That's why they're saying all these social media platforms are a digital casino because
we all kind of accept you can become addicted to gambling, right? There's something about
that process here that is different from other sorts of things. They're saying that social media is
more like that. You go on, you scroll, you don't know what you're going to get, you see the next
video, you get this dopamine rush, but it's not satisfying enough, so you keep going;
that's the argument they're laying out here. I would imagine that most of our listeners and
viewers would be like, well, obviously social media is addictive. And this is one of those cases
where the law is trying to catch up to something that people already feel is true in our lives.
And it's because of Section 230, as you said, that we have not been able to close the gap between people's lived experience and what we can actually say about these companies in, like, a legal sense. So if I'm understanding you correctly, Tom, you're saying that they're trying to make an argument that basically would not change Section 230. Right, they're actually not going near Section 230 at all. This case is finding another way in. They're saying, forget about
section 230, the problem isn't the content. People are always going to post harmful content
on social media. They're saying the problem is that these platforms get you hooked and it's the
design of the platforms. That's the problem. And there is a lot of evidence backing up this argument.
There have been all these, you know, document leaks over the years that show that Meta in particular is well aware of the problems that users experience on its platform, right? There was a famous case years ago called the Facebook Papers, where this employee named Frances Haugen leaked tens of thousands of pages of internal conversations. They knew, for example, that Instagram was causing these, like, spirals among teenage girls, where they would end up having really serious body dysmorphia issues and eating disorders. And Meta knew that there were things they could do to prevent this, and they chose not to do it because they didn't want to harm engagement.
The stakes of this are incredibly high because presumably if the court finds against
the big social media companies, that leaves them open to the mother of all class action
lawsuits, right? Because this has affected everyone who's used their platforms.
There are more than 2,000 similar lawsuits going through the courts right now at like different
stages of the process. And this case in Los Angeles is kind of seen as the bellwether, right,
that depending on how this goes, this is probably going to set a precedent for how all those other
cases will go. If, you know, the courts find that these platforms are legally addictive, that will
open the floodgates for thousands more lawsuits. And it would probably create a new opportunity
for lawmakers where they would say like, okay, we've all decided that these platforms are addictive.
Now we're going to do something about it. But more broadly, I think this is part of a much bigger
shift, right? For years and years, like more than 10 years, we've been talking about, you know, how all these companies are causing so many problems. It's really reached a breaking point, right?
Australia passed a law that says like social media essentially is illegal for young teenagers,
right, that you can't get on these platforms. There's a similar law proposed in California.
It's part of this broader push to try and bring these companies to heel.
One thing that interests me about this case when it comes to AI is whether or not this case will
then define the way that the courts or the public or lawmakers start looking at AI companies and
whether or not they're addictive as well. And one of the things that I've been reporting on recently
is the fact that AI companies have really been trying to effectively get a version of Section 230. Like, Section 230 protected social media companies from liability for so long, and now AI companies are trying to also get a law that makes them completely unaccountable for any of the harms
that they produce. And if this case actually goes the way of the users, could that create a
cascading effect where then it undermines the campaign of the AI companies as well?
Absolutely. It's a really interesting moment because there's been this big shift because of AI,
just in the way that tech companies are operating, right? So for the longest time, all of the biggest digital, like, online companies in the world were just full of stuff that their users post, right? AI is different, right? Because when you talk to ChatGPT, you're not encountering
user-generated content. Like, the company itself is speaking to you. So if the company's tool creates a piece of information that hurts you, this law, Section 230, does not
protect them. Yeah, that's right. And it's worth pointing out Section 230 isn't just, like, a bad thing.
It is what allowed the internet to flourish, right? Like if social media companies were like
directly responsible for every single thing that their users post the minute it goes online,
you wouldn't be able to have something like Instagram the way that it looks today, or even Google search, right? So there's also a lot of freedom in it. So it's a really
hotly debated, complicated issue. Also really interesting to look at how the companies are
responding to this. It's another moment where like it seems like Mark Zuckerberg in particular
gets pulled in front of the public and forced to answer a bunch of questions. But kind of famously,
like Zuckerberg has had a lot of really weird flubs in situations like this before because he just
is sort of a weird guy. He's a weird dude. His people, his team, were literally trying to make him act more human; that was how it was described. And when they asked him about it, he said, like, well, yeah, I think famously I am pretty bad at this sort of thing that we're doing right here.
That's quite self-aware of him. You have to give him credit for the self-awareness.
The human training is working apparently. But also a few years back, he said that he'd made a 20-year
mistake, a political miscalculation, which was essentially apologizing too much. And he said that
his company and he had historically taken responsibility for things that weren't actually his
responsibility, right? That like, oh, people are criticizing our platform. Well, it's not really our
fault. And that's kind of the argument that they're making in court here. They're like, we know people
are getting hurt. We're very unhappy about that. We don't like it. But they're getting hurt because
of human nature. And one thing you're not hearing really from these companies in these cases is
sorry. And we really haven't found any actual way to hold these companies accountable for the things
that happen when their users are engaging with the platforms. So I think it isn't hyperbolic to say
that this could be a turning point in the history of the internet. This one little lawsuit,
depending on how it plays out, we could see a much different technology landscape and a much
different internet over the next few years, depending on how the court rules. Yeah, that would be
huge. That would be really, really huge. So speaking of crazy CEOs, I had the craziest week last
week, attending the AI Impact Summit in India, which was this massive international event where
more than 500,000 people descended on New Delhi to gather and talk about all things AI. And
this event brought together some of the biggest big wigs in the AI industry, as well as some
world leaders, including, you know, French president Macron and the Brazilian president Lula.
And it was just a spectacle, a spectacle. Like it was this massive circus of scale and size.
And so overwhelming in so many ways. But there were just all of these hilarious things that were
happening, like Macron showed up and would not stop using the phrase "Jai Ho" at every opportunity.
Wait, "Jai Ho" from Slumdog Millionaire? Oh my god. But Macron, like, literally used it in a speech. He tweeted it the moment he arrived in India. He posted it on Instagram. He, like, made a video with the soundtrack in the back. Anyway, there was this very viral moment where Sam Altman and Dario Amodei,
CEO of Anthropic, refused to hold hands after Modi tried to make all the tech leaders hold hands in
this giant celebratory line to say like, hooray, we're all united in our goals at the summit.
And for listeners that have been with us since the beginning of this podcast, they will know
that Altman and Amodei have deep beef with each other. And it was displayed to the entire summit, and to the entire world, that everyone was holding hands except for the two of them.
It's so petty. It was so so silly. But what was so interesting about this summit is that
there are kind of two summits actually happening simultaneously. There's the public facing summit
where you have all these talks given by the CEOs. Then you have a bunch of panels. I was part of
one of the panels. So I was attending because I was speaking. But then there's a secret summit
that's happening behind the scenes. And this is the real reason why all the CEOs of these AI
companies show up. They're all trying to have these back room negotiations directly with
governments, with world leaders to essentially codify their ability to operate above the law.
So in the public facing summit, there's like civil society, there's university students,
there's academics, there's lots and lots of different types of people from different walks of
life that are representing very different perspectives. But in the secret summit, it's just the
government face-to-face with the companies. There's no one else invited. They decide what norms they
want to set for how a company is allowed to operate in a certain region. And literally no one
else can participate. And so what came out of these closed-door discussions this time around,
they announced over 250 billion dollars of data center investments. So it kind of set this tone
of like, we are here to do business, we are coming to the global south to open up markets and
collect more data and build more infrastructure. But there was also this vibe shift that happened, and it kind of manifested in two ways. One was that this summit actually opened up the public
facing summit to the public themselves. Usually that doesn't happen. Like the public facing summit
means simply that it's live streamed to people's living rooms if they want to watch it. But in this case,
anyone and their mother was able to come and show up and just like be in the audience and ask
questions. What kind of questions were the audience asking? Some of them just went on full
rants about like American imperialism, which was hilarious. Others were asking questions like,
how should I be preparing for an AI future? Or how do I protect my kids' critical thinking?
How do we make sure that our governments don't invite these companies to keep building data centers
in our communities? It was like such an interesting array of questions that represent a cross-section
of societal concerns. But the other thing that happened that I think represents this vibe shift
is that the CEOs, during their keynotes and during their public interviews, were just saying the wildest things. Because I think the public pressure and the public criticism against the AI industry have now reached a point where they are really
on the defensive and they feel like they have to justify far more why they are consuming so many
resources, why they are collecting so much data. Yeah, there was this quote from Sam
Altman that was kind of not well received. Yeah, I mean, you can see why. I feel like we need
to read the full Sam Altman quote because it's you cannot make this up. So he says one of the things
that's always unfair in this comparison is people talk about how much energy it takes to train an
AI model relative to how much it costs a human to do one inference query, which I already have
so many thoughts. I love doing that stuff. You know me. I'm always doing my
inference queries. But it also takes a lot of energy to train a human. It takes 20 years of life
and all of the food you eat during that time before you get smart. And not only that,
it took like the very widespread evolution of the hundred billion people that have ever lived
and learned not to get eaten by predators and learned how to like figure out science and whatever
to produce you. It's very clear that Sam Altman does not know what a human is.
Part of me wants to be empathetic to him here. And he just had a baby. So I wonder if he was
actually thinking about that. Like he's a newly minted father. This thing is kind of like an AI.
Yeah. It's like observing his child being like, wow, this takes so much effort to raise a human.
And like maybe that's, you know, the most charitable interpretation.
Well, there's also like the obvious argument that it's like the human beings are here and they're
living lives and like falling in love. The AI is optional. We don't have to do that part, but they're
kind of operating from this perspective. But it's like, well, we must build this technology
and it is inevitable. So yeah, of course, it's going to be really bad for the environment.
Yeah, I don't know if people are buying it. It comes down to this kind of post-human philosophy
that you get a lot in the AI industry. They don't think of themselves as building
a tool fundamentally for humans. Yes. They think of themselves as building the successor to
humans. Exactly. Something like God, which to be fair is like a small faction of the people.
It's not like all of them, but it is a growing faction that have this ideology that what they
are ultimately trying to create is going to be duplicative of and replace humans. Yeah.
But whether or not it replaces them, like the idea that we're making a god that's going to solve
all our problems, like that is open AI's mission, right? Like a couple years ago, there was a great
profile of Sam Altman in the New York Times where he said like the plan here is that we're going to
make this tool that is smarter than any guy. We're like making a new guy and that guy's going to take
all the jobs. Open AI will accumulate all of the world's wealth and then we will redistribute it
to the people. Like that literally that is what Sam Altman says his plan is for his company.
And like we've been tuning our whole world around this plan, right? Or at least like the world
governments are and like all the biggest companies in the world. Like this is the new thing that
everyone is doing. This is the future. Here's what it's going to be like. And now over the last
week at this conference, you were at it kind of seems like maybe they're wobbling a little bit.
But then at the same time and all these backdoor secret meetings that you're talking about,
they're making new plans for all these data centers. They're going to build. So whatever the
public feels like in some sense, they're working hard to just keep charging ahead.
There were moments when I feel like the CEOs really revealed what is usually left unsaid.
Like Brad Smith, vice chair and president of Microsoft, he literally used the phrase,
we need governments to generate demand for our technology, which was like such a wild admission.
You know, like he's basically saying we're having trouble like getting people to want it.
So we need the government to force people to want it. And there were kind of many signals that
that kind of set this tone of yes, they are striking these deals. They still got the $250
billion of data centers. And yet they are under a lot of pressure. And public pressure is basically
working. Like they are really struggling to regain control of the narrative.
And as they lose control of the narrative, I think they know that this will start to stall their
ability to shape the world that they want. And every time someone like Sam Altman says something like,
oh, think about how much it takes to train a human, it just kind of slices away and slices
away at the public goodwill. Which does matter, right? Because like the idea that this is going to
be great, it's going to change the world. If we all collectively stop believing that, it could be a
pretty serious economic problem for these companies, because they need a lot. It's hard to overstate
how much money these companies need. And the more that the public isn't buying it, the harder
this argument is to sell to their investors. And the impact could be pretty dramatic.
Join us next week. If you're in the UK, you can listen on BBC Sounds. If you're outside the UK,
you can listen wherever good podcasts are distributed, or search for "The Interface podcast" on YouTube.
And if you want to get in touch with us, you can email us at [email protected]. We do
read all of your messages. Or you can WhatsApp us on 444-3207-2472 or find us on social media,
links are in the show notes.