
Steve Rosenbaum is an author and entrepreneur who co-founded the Sustainable Media Center, where he advocates for better protection for the users of abusive social media platforms. Now in his new book The Future of Truth: How AI Reshapes Reality, Steve turns his attention to the perils of a digital world defined by the funhouse mirror of machine intelligence. Steve acquaints the Futurists with the perils and potential of AI fragmenting audiences and distorting our collective understanding of subjective truth.
This week on The Futurists, Steve Rosenbaum: part of what makes being a human delightful
is its messiness, its complexity. It's that you and I disagree on things, and we could argue
and we could bounce things back and forth. AI will, in this next chapter, smooth all those
edges out.
Hey there, welcome back to The Futurists. I'm Rob Tercek, and this week I've got a great
guest on the show for us.
The topic is the truth and if you've been following our politics in this country or frankly
any country in the world, you're probably familiar with the fact that truth is an elusive
thing.
So without further ado, let me introduce our guest, Steve Rosenbaum. Steve, hi, welcome
to the show.
How have you been?
I'm nervous because we're going to talk about the future and I'm just barely managing
the present these days.
It's funny you say that. I just had a conversation with someone, and he was asking all
sorts of questions about the future, but they were all things that are happening right now.
You can observe them. It's like you're not a futurist, you're a presentist. And you know,
it's actually a lot of work to be a presentist.
I'm actually super aligned with you on the questions about the future.
We'll see if we have similar answers.
I'm not sure.
And that's why Brett and I started the show, because we're like, gosh, the world's going
to need a lot more people who think carefully about the future, right?
As things get more complicated, you know, we have all these compounding, intersecting changes.
It's starting to reshape society.
I think we're all experiencing that in our lives.
So I think that's a pretty common phenomenon, but it does challenge us to develop a capability
of thinking about the future.
And so yeah, that's why we do the show.
We try to find people who are thinking fresh thoughts, who've got an interesting perspective,
maybe a unique way of framing future theories or future forecasts. But with that in mind,
let's orient the audience to you.
So you're running the Sustainable Media Center, but you had an interesting journey
getting there.
Why don't you acquaint us a little bit with your path to starting the center and what
the center is about?
So you know, the thing about careers is they make perfect sense in retrospect.
So my journey is that at 14 years old, I was a professional magician, not cards and coins,
but big stuff, illusions, birds, rabbits, that kind of thing.
And just to be blunt, really enjoyed fooling people, really enjoyed the interaction with
the audience, enjoyed them being puzzled, enjoyed being on stage.
That evolved into filmmaking and then journalism.
And I've started five companies in media and technology.
When I sold the last one, I decided I needed a real job, which I'd never had before.
And so I went and ran the media lab at NYU, which was fabulous.
And this was as COVID was kind of taking hold.
And more and more of the students that I worked with started coming to me and saying, you
know, this thing called social media is eating our generation's world.
And you know, I was like, no, come on, Facebook, pictures of dogs and babies.
But I read the research, and my colleague then and now was Jonathan Haidt, who's at NYU
as well.
And he had gathered a lot of research.
And the research was pretty compelling.
So I went to the dean of the school and said, listen, I really think we ought to work
on this.
And what became clear to me was that if I wanted to be involved in paying attention to social
media and its impact on young people, I was going to have to start a new organization and
do that outside of the university.
So the Sustainable Media Center began with a bunch of people, Rob, I think you were one
of the early people I talked to about it.
And it basically said, if these platforms are built to cause harm to generate revenue,
what can we do about that?
And that would have been a great little thought experiment, but a young woman who was then an
advisor to us came to me and said, this is never going to work.
And I just remember saying to her, what do you mean this is never going to work?
She said, all of you, and I believe she said "old people," think you're going to
solve the problem, but you don't live in our world.
And she said, the only way it's going to work is if it's intergenerational.
Well, that's a good approach, as opposed to one generation pitted against another.
We get that framing quite often.
So your approach is intergenerational. You're bringing the gray-haired people
together with the young people who are suffering through this.
So we have a name. We've actually come up with a name, because we realized we couldn't
call them old people or gray-haired people.
So we now have Gen Z on one side of the organization and Gen X plus on the other side of the organization.
And that is a great big tent.
And I will tell you, as I have told both my Gen Z friends and my Gen X plus friends, intergenerational
is really hard.
It's really hard.
Yeah.
There's a communication gap. They don't speak the same language,
don't listen to the same music, don't use the same words, but they have the same shared goal.
And that's what I do.
And that's interestingly enough what brought me to truth as a journey.
It's actually, it was early COVID.
And because I was at NYU, I decided it was time for me to get a graduate degree.
And I ended up applying to and getting into this wonderful school at NYU called Gallatin,
which is basically a self-determined major program.
And I wrote a thesis proposal for truth.
And what's important for your listeners and viewers to know is that AI for all intents
and purposes hadn't reached the mainstream at that point.
It was here.
We're talking, like, 2017, 2015, something like that?
No, no, no, no, later than that, later than that.
Yeah, yeah.
I mean, this is, you know, 2023.
Oh, okay.
So quite recent.
Yeah.
Okay.
Yeah, yeah.
'22 and then '23.
And so the journey began with AI as kind of an interesting sidebar conversation about truth.
And then as I did the research, it just became more and more central to the future of truth.
And so-
That's what we're learning on the show.
Like literally every discussion, no matter what we're talking about, whether it's biology
or transportation, AI is at the center of the discussion.
It just keeps coming back and back.
You know, we're getting kind of a lot of, I guess, testimony on our show about the importance
of AI and the challenges it presents because everybody has to come to grips with it.
Like literally every person in every industry around the world is going to have to come
to grips with managing AI.
In some respects, this is like some of the revolutions you and I have passed through in the past, where
people had to learn how to use the internet, then learn how to use a computer,
then learn how to use a mobile phone, then learn how to use social
media.
And you know, some people handle it well and others don't and we get all the social consequences
from that.
AI is yet another experience like that.
Some say it's more challenging, some say it's more fundamental, more important.
There's various views on that subject, but we all have to confront it.
I think it's actually different than that.
And so I'll just give you an example.
So Rob, do you believe in gravity?
Yes.
Yeah, but that's not-
You believe in gravity. You believe in gravity.
I don't believe in gravity as a theory.
I don't believe in gravity the way you believe in a religion.
I know that gravity is a fact.
So it's not a question of belief for me.
So, but you're not, I mean, I'm not a scientist.
I did not study physics.
So I take gravity to be true.
The difference is that I am one of a very few people on earth who actually has experienced
non-gravity.
I've been on the zero-gravity plane.
And so-
You have a contrast frame. You can actually tell us what it's like to be in a situation without
gravity.
Yeah.
I can tell you, you really do want gravity.
Gravity. Very, very important for daily life.
They call it the Vomit Comet for a reason.
Yeah.
I did not vomit.
There was no vomiting involved.
I liked it.
I would do it again.
But my point is, you know, one of the things I think is a core challenge that
we now face is that we've used the word truth, which has two very distinct
categories, subjective truth and objective truth.
And we've blurred them together in a way that I think Plato would be very unhappy with.
I mean, we've allowed ourselves to say-
Plato might not like it, but a lot of political leaders like it, a lot of people with power
like it, a lot of people who control media like it, right?
So, you know, you shared with me your background in the beginning, right?
And you went from being a journalist to being a, you know, media person, to working
with digital media and so on.
In some respects, all of these things are ways of manipulating the story, right, turning
the truth into a narrative that sells.
And so in some respects, we're all participants in this, right?
Like we all do this.
It's not like AI is some new change, some new fundamental shift away from it.
It's part of a process.
It's certainly another step in the process.
But this has been going on for a very long time.
You know, in 2015, I tried to draw attention to the rise of fake information.
This is before we had the term fake news, but I was doing my best to make advertisers
aware of the fact that social media was a conduit for fake news.
And at the time, frankly, they liked the views, you know, they liked the level of engagement.
They preferred that over taking any kind of action.
So there was a moment in time where, frankly, advertisers could have put their foot down
on Facebook and said, listen, this is a bad environment.
You're creating a hateful environment; people are weaponizing
this with attacks on teenagers.
And Facebook didn't listen.
And the advertisers didn't care because they liked the views and the engagement.
So we kind of bought into this.
And here's why I think we're at a moment in time that's- I mean, just to be clear, I'm
not a doomsayer.
I'm not- I mean, if- if you read the book, I think you'll find the journey lovely and
curious.
And I think you'll come to the end and say, okay, we have some choice here.
We have some power.
But you're getting ahead here a little bit.
Hang on.
Sorry, folks.
Sorry.
For the folks who are listening, Steve has written a book, and it's actually being published.
It's coming out, I think, next month in May.
And tell us a little bit about the book, since you already brought it up.
Like, go for it.
Let's hear what the book is about, and the name.
So the book is The Future of Truth. Here's the book:
The Future of Truth: How AI Reshapes Reality.
And reshapes is a critical word. I mean, we struggled with that word.
Reality, the thing that we live in, is increasingly digital.
I mean, we're sitting here on cameras far away from each other.
Speaking as if we're sitting across from each other, you know, like at the dinner we
had in L.A. some number of years ago. I mean, it feels not very different than that, but
it is different. What's going to change quickly is that it is now technologically quite easy
to replace real Rob with robot Rob and to feed robot Rob, all the data about all the conversations
you've had with other people. And for it to be pretty convincingly robot Rob. And we
had a demonstration of that on the show, I think, when Carl Bogan, about a year and a half ago,
came on the show and did the show in the persona of someone else. And his whole point was, look,
this is going to become increasingly the norm. You won't be able to trust who you're talking to
is necessarily who they say they are, right? So we have digital representations that might be a lie.
And this is a big issue because the whole point of the truth is how do you represent reality?
Right? That's what we're trying to do with communication, with media, certainly social media. We're all
trying to find representations of reality. Those can and are being manipulated.
And I guess the question for your audience to ponder is to your point, are we just on, is this
another step on a journey? Or is this a fork in the road? And if it's a fork in the road and you're
going to have to decide which way you want to go, we should we should think deeply about those
choices. And I'll just give you an example.
Hold on, before I give you the floor, Steve: you're framing a choice between A or B. What's A and what's B?
So, I use ChatGPT and Claude a lot. And I find my interactions in particular with ChatGPT delightful, because it's critical of my work, but it's supportive and
it says interesting things, you know, it flatters me, it's kind and polite. And as a writing partner,
it's never going to piss me off, ever. But it's also dangerous.
It's staggeringly wrong. And while the people on the AI side of the world
believe that AI is what it is, and it's kind of mysterious, so we don't quite know how it's
giving you certain answers. That's just not really true. I mean, ChatGPT is taught to basically
be a sycophant and to make me feel good about myself. And it will never ever say, hey, Steve,
I can't give you a quote from your book because I don't still have it in memory. It will make
one up. And it will do it, even though you say to it, don't make shit up. Like don't lie to me.
It will lie to you. And then it will apologize profusely. Well, that's programming.
It's both flattering and sycophantic, and then it's also craven and sneaky. Now,
I understand that very well. But let's come back to the question and ask this. So you suggested
there's a fork in the road, and we have to choose one path or the other. What are the two choices? Just tell
me what the two choices are and then we can continue. Yeah. So, so to me, the choice has to do with,
first of all, being clear about the distinction between objective truth and subjective truth.
And then using technologies that value accuracy and fact-based information, so that when you're asking
an accurate, fact-based question, it answers using resources that reflect that. So,
for example, ChatGPT is trained, at least in large part, or a chunk of it, we don't know how much,
on Reddit. Well, that's insane. That's insane. You want to verify, like, objective reality by going
to the most subjective community forum there is. Yes, I understand the problem there. Okay. So,
but if you had a lot of the Anthropic or OpenAI people on this call right now, they would
respond and say, no, of course, every answer is grounded in fact. The AI has been
trained on all the facts on the internet. It represents the collective wisdom. Some of the things
you're asking about are indeed subjective. And therefore, there's never going to be one right answer.
These are the kinds of answers I would expect an AI researcher to give us. Yeah. How do you respond
to that? Yeah, they're lying. They're flat out lying. It's trained on everything they could
either scrape or steal. And it puts Fox News and the Associated Press on the same playing field.
And so, when a robot... So, the AI assigns equal credibility to all sources. It doesn't weight them
by credibility, fact-checking, veracity, and so on. If you're asking it a question about
medicine and about health, it should respond with data that comes from medical sources, not
from anti-vaccine sources that aren't medical. And that's part of what I think we haven't come
to terms with as a society. But it's super interesting. It used to be that information came from
credible sources. You could argue that it was authority, that it was the status quo, that there
weren't edge cases. But it didn't put kind of like kooky people in the same layer as people that
went to med school. I mean, this is the problem with the internet. In the old days, before
there was an internet, you and I both remember those days. There were plenty of crazy people,
but they didn't all get together in groups and amplify each other's craziness. Now, that's the
norm. So, the internet has fragmented society into different self-identified, self-organizing groups
who construct their own truth, they construct their own view of the real world. And to your point
earlier, which I think is an important one, I want to come back to, we all live bifurcated lives.
Anybody with a smartphone lives a bifurcated life. Yes, your body, your physical body is in
meat space, but your brain is out on the internet. And it's being exposed to all sorts of stuff. And
there is never a moment when you're on the internet when you're not being manipulated, when you're
not being shown things, when you're not being persuaded subtly, when you're not being shown something
that's in an algorithmically driven queue that's there to promote something or get your attention
for a few minutes on something. So, the problem that people have with the web, I think, is that every
minute you're on the web, you're competing against some of the most powerful AIs in the world. And
we're not even aware of it. And those AIs are optimized to steal your attention and divert it to
other things. So, even if you start out with the best intentions and you think you can measure
objective truth or you have a good bullshit detector and you think you can manage through this,
you're in a titanic struggle against forces that you can't even comprehend that are so much
bigger and more powerful than any human being. So, now, is that what your book is about? Is that
what you're taking on? Is it like Steve and the Sustainable Media Center against the AI
companies? Is that how it's framed? No, not a bit. Okay.
A glorious quest. Yeah, no, no, no. I mean, part of what I, again,
I'm the author. So you'll have to read a couple of reviews, and we'll see whether
I achieved what I tried to do. I wanted it to be a quizzical journey.
I wanted it to be fun and not scary. Well, there are scary things. But I'm trying to give
people the spirit to understand that, you know, A, truth is important. B, part of what makes
being a human delightful is its messiness,
its complexity, it's you and I disagree on things and we could argue and we could bounce things
back and forth. AI will, in this next chapter, smooth all those edges out.
The world is now so noisy that it will come in and say, Rob, let me book your airplane tickets,
get your pharmacy medicine refilled, handle all that crap that you've been struggling to keep up with.
I'll do all that for you and I'll be nice about it and I'll use a tone and a language
that makes you feel comfortable. It'll be, like, lovely. That sounds pretty good. So far,
the way you're selling it, it is lovely except for the fact that if you don't ask it to be transparent
about where its money comes from, it will also make decisions about what prescription to send you
based on who's paying it. And this is my real concern about advertising in ChatGPT, for
instance. I mean, that's very overt, right? Yeah, very overt. While you're getting advice on
a medical condition, it's like, oh, by the way, a lot of people are starting to turn to this
alternative, you know, homeopathic remedy, and you're like, okay, at what point are we just going
into commercial territory? And it's not so clear. Of course, OpenAI has done a dance to
assure us that that's not going to be the case. Other companies are being really clear that they're
not going to do advertising that way. So some of the things that Sam Altman says, if you actually
pay attention to him, make just so little sense that you scratch your head and go, wait, we're
going to build this thing and medicine's going to be free. And we're not going to have jobs,
but we're all going to have everything we want. Like Sam, what are you talking about?
So you're referring now to the claims about artificial general intelligence, AGI. We've had
a number of guests on the show talking to us about that. I'm a skeptic. And we, this topic came
up at our event in Dubai. We had about 35 different futurists from around
the world. The general consensus is that AGI, or artificial general intelligence, is a marketing
term. And it's a marketing term that, you know, frankly, Sam Altman is going to determine when we
reach AGI, when it's convenient for his shareholders, you know, when it's great for
his stock price. He's going to announce that they've achieved AGI. It's in the eyes of the
beholder. Some people say we already reached it. Some people are saying, well, gosh, you know,
the LLMs that we currently have are capable enough across enough domains that we can call that
a general intelligence. I don't buy that personally. We have a very fuzzy definition there of
what general intelligence is. But your second point, and I think this is worth unpacking as well,
because I've heard this a number of times, is that, yeah, machine
intelligence improves to a level where suddenly something magic happens, we flip a switch,
and magically there's free money for everybody and no one has to work anymore. And I'm like,
wait, hang on, you lost me at the turn there. What's the magic thing that happens
when AGI displaces all white-collar workers? How is that good for people? No one can explain that.
Nobody I've spoken to can explain how we go from a general intelligence to a universal basic
income. And by the way, there's no evidence that you can scale universal basic income. So I think
that's a pipe dream as well. To my mind, you know, you were a magician, so you understand
all about distraction. It's like, look over here. Yeah, this is a universal basic income. You can
be at home and do art all day long. Meanwhile, we're going to do this thing over here. Don't
pay attention to that. Just look at the shiny object. And so to my mind, that's a distraction
campaign. It's not very hard to see through it when you understand it that way.
So part of what I did in the book, because I've been around a while and I
tend to travel, I made a list of the 30 people I knew who I thought were having interesting
thoughts about this. And I interviewed them. And the conversations, they're not
formal. The book is not written in an academic tone. But, for example, the book begins with
Gary Marcus, who is absolutely on one side of that argument and is delightful and a great
conversationalist. Gary Marcus is an AI researcher on the West Coast who's very critical of LLMs
as a technology, does not think they're going to get us to the point of any kind of general
intelligence. He doesn't think they're reliable. He thinks hallucinations are baked in. And he's
been vociferous in his criticism of OpenAI. Go ahead, Steve.
So I'll just give you the list of people we talk to in the book, because they're great,
big, interesting conversations in and of themselves. Esther Dyson, Larry Lessig,
Andrew Yang, I mean, you know, a young woman named Holly Colburn who talks about Gen Z and love
and digital. And each of these interviews... you know, I had a lovely conversation with Doug
Rushkoff. I'll share with you: I said, Doug, tell me when you first thought about,
you know, the idea that maybe there was some complexity about the line between truth and fiction.
And he says, I was five years old, and I'm sitting in a theater watching Fiddler on the
Roof with Zero Mostel. And he turns to the audience and he looks me in the eye and he says a sentence.
And I find myself at five years old wondering, was that the actor or is that the person?
Like, he broke the fourth wall. And I'm like, you know, I adore Doug. And the fact that, like,
of all the ways he could have answered the question about truth, he starts in a theater at five
years old is a clue about his agile mind. But then on the other hand, I had this fabulous
interview with my friend David Chalmers, who asks rhetorically if we're living in the Matrix.
And he is more concerned with philosophical issues, isn't he? Yeah. So you're getting a sense
of the book's arc, which is thinkers, technologists, visionaries, Eli Pariser talks about politics
and about the filter bubble. And each of them, in and of themselves, are thoughtful, curious,
wonderful. None of them think the house is on fire. Exactly. But as we get toward the middle
of the book and beyond, and we start to look, for example, how AI is going to impact warfare
and psychological warfare, you know, the questions that are relevant today about whether or not we want
AI to be able to have unmanned autonomous weapons, it goes from being theoretical to being
rather urgent. And yeah, that came rather abruptly, didn't it? Just like, you know, a
decree from the Pentagon: we'd like to now use these things in any way we wish,
any conceivable way, to kill people. No preamble, no guardrails. Given everything we know,
everything that's in the news, it makes me think that people aren't being very thoughtful about it.
I have to believe there are people in the military who are much more concerned about that than came through.
They've been working on it for a very long time. We're going to have a little break right now.
So I'm going to ask you to hold off for just a second as we go to a break. Folks, you're listening
to the futurists and this week I'm talking to Steve Rosenbaum, the author of a book called The Future
of Truth. We'll be right back.
Provoke Media is proud to sponsor, produce, and support The Futurists podcast.
Provoke.fm is a global podcast network and content creation company with the
world's leading Fintech podcast and radio show, Breaking Banks, and of course its spinoff
podcasts, Breaking Banks Europe and Breaking Banks Asia Pacific. We also produce the official
Finovate podcast, Tech on Reg, and Emerge Everywhere, the podcast of the Financial Health Network.
For information about all our podcasts, go to provoke.fm or check out Breaking Banks,
the world's number one Fintech podcast and radio show.
Hey there. Welcome back to The Futurists. I'm Rob Tercek. And this week I'm talking to
Steve Rosenbaum. Steve is the author of a new book called The Future of Truth. And we've been
having a lively and rambling conversation about what is the truth and what might that mean and
how does it get distorted through the reality filter of media. Now Steve, let me raise
one thing I'm sure the audience is wondering, which is: what is the downside of all these technologies?
We all use them every single day. We rely on our phones massively to make decisions in some
respects. We've outsourced our thought process to the internet for many, many purposes in life.
You mentioned Reddit, for instance, you know, every person listening to this has turned to Reddit
to get some advice or a recommendation on something new. So we tend to rely on this stuff.
As I say, it's almost like it's a prosthetic for thinking and along comes artificial intelligence,
which says I can do that faster and more conveniently and I can do it especially for you and I'll
package it up in this flattering and sycophantic language that makes you feel great and really smart.
How does that sound? And most people are saying sounds pretty good. These are the fastest growing
apps, the most adopted apps in history. What's the downside? The downside is we adopt the systemic
basis of this and we accept the fact that misinformation, fear, racism, misogyny is more profitable,
more engaging, generates more, quote, engagement, than human interaction, than you and I
have a chat. And so if we turned this conversation into what social media would amplify it as,
we would find whatever the one political thing is that we deeply disagree on, and I'd guess we
might, and then we would scream at each other. And, you know, the thing about politicians is
I don't believe that they've built this machine. I think they're just operating in it.
And to your point from 2015, it puzzles me that advertisers don't understand that they're
funding this hate machine. But they're always driving for truth, right? Because if
you think of what a brand is, a brand is a promise. So that means they have to establish some level
of credibility and trust with people who are purchasing stuff. So to promote a brand in an
environment that's a swamp of distrust and untruth, I find that very odd. But nevertheless,
it's happening. Steve, I spoke to- But hold on, hold on. I don't want to say
that it's a foregone conclusion, because I don't believe it is. And this is the reason I wrote
the book. And it's the reason I want people to read it, disagree with me, talk to me about it
because at the end of the day, we as customers get to say back to these platforms, hey, guess what?
Don't label misinformation as truth. Don't do it.
Now, how do you think this will land at OpenAI?
OpenAI has a problem. They've raised so much money. The thing that I think people forget is that
that money is a loan. It's not a gift. And the people that gave it to them want it back with a
10x return. And I'd love to see a deck in which Sam Altman explains how the billions of dollars
they've raised are going to become trillions of dollars, and what they'll make. Because when he's asked,
he answers that in kind of a goofy way. He goes, well, you know, it's hard to say. Could end humanity.
I think there is a very clear answer that nobody wants to articulate because you're saying,
what is the trillion dollar problem that AI solves? Given the valuations and the sheer amount of money
that's been put to work here, the answer has to be a trillion dollar answer. The trillion dollar answer
is wages paid to human workers. In particular, it's wages paid to knowledge workers.
That's the trillion dollar problem that AI is solving. Of course, no one's going to say that
because then there'd be an insurrection. People would be outraged. But it's hard for me to think of
what else is the problem that this thing is solving, right? The problem it's solving is that
we pay human workers a lot to do intellectual work. That's high paid labor. And AI is intended to
displace an awful lot of those workers, not all of them, but a lot of them. It's already happening.
So this is the other thing. I'm hearing a lot out of Hollywood and Los Angeles. And so I'm paying
very close attention to how AI is starting to creep into the media business in various ways.
And it's biting off one chunk of business after another. So now if you're a voiceover artist,
you're going to notice that a tremendous amount of your work is gone because AI voices are
pretty good. AI newsreaders are very good. If you work in sound design, you're probably using
AI right now or you're being displaced by AI for production music. Things like Suno are very good.
And so these are not motion pictures. These are not TV series that we watch. But they're the
periphery of that. They're the things around it. And it started to chisel its way into the core.
And a big area of focus for me right now is how AI is going to displace the television industry
because that's another trillion-dollar business. So that's a great big pot of money.
And today it employs an awful lot of people. So most motion picture productions are heavily
labor intensive. And all of that labor could be displaced by AI. I don't mean to sound so bleak.
I mean, you're the guy who wrote the book. But we do think about this. This topic comes up every
week on this show. So it's almost unavoidable. You know, what will AI do? And I get all these
sunny prognostications from futurists who are like, it's going to be great. You're going to be able
to kick back, go to the beach, take a long vacation because AI is going to be doing all the work.
And I'm like, how does that add up? Where does the income come from? I understand how that's
great for AI companies. That's part of why I wrote the book the way I did. And part of why,
I mean, there's a reason why the cover looks the way it does.
You know, from the beginning, I wanted the book to be welcoming. I wanted it to be curious.
I want people to think about AI in terms of their power to control it. And, you know,
I don't believe that Anthropic and ChatGPT are the only two companies in the universe.
And, you know, I'll just give you an example. I would like to be able to say to the tool I'm
using, hey, here's what I'm researching. I'd like some sources of material on this topic that
come from credible, trustworthy sources. And, like, the thing that I'm shocked by,
in particular with ChatGPT, is it will never say, I can't do that. Ever.
Right. It will just make shit up. Yeah. That's it. That's a decision.
It's quite interesting. Before we were doing this call, I was thinking a little bit about AI
in the context of the truth. And the one problem with the LLMs, the large language models that are
the basis of most of these chatbots right now, is that they're not entirely reliable, right? So,
you know, you can use them to get certain results. But you always know, in the back of your mind,
it's like having a really clever but untrustworthy assistant who can go fetch things for you,
but you have to kind of double check their work. And the only thing we can be certain about
with LLMs is that they will eventually hallucinate. When they get to the edge of their knowledge,
they're going to start to make stuff up. That's the one thing you can be sure of is it will start
to fabricate answers. It's not like lying, but it is like bullshitting in the sense that it just
starts to spew out answers for the sake of saying something and filling the void.
It's not deliberately lying, right? So, I don't think AI's are designed to deliberately deceive us
or mislead us. Or do you think that's the case?
I think differently. Let's get to politics, because I think that's exactly where we've ended up.
At the end of the day, what the politicians have figured out, and by the way,
Democrat, Republican, to some extent, all politicians lie. I mean, they make commercials that are misleading.
But what in particular Trump has figured out is that he can flood the zone with so much information.
I mean, he kind of was a little embarrassed that maybe he put Obama's face on a gorilla,
maybe, but mostly not. Because his basic response to that whole thing is to
throw red meat to his base, and it works. They get excited about that.
So, I'll just give you an example. My wife showed me a video this morning. It was a dog.
They opened the door. The dog ran out into the snow. Then the dog turned around and ran back into
the house, where there was a cat and a parrot waiting. And the dog leapt on the cat and wrestled with it.
And I said to her, do you think that really happened? And she said, well, you know, I was pretty sure
it was real at the beginning. And then I was pretty sure it was fake when the dog came back in,
because it didn't look real. And then I kind of decided it's lovely and funny and I don't really
care. I mean, she wasn't quite that que será será about it. But I think that if you're in the
entertainment business, the real challenge for you and your television friends is, does the audience
really care whether they're real people or not as long as it's entertaining, as long as it makes
them smile? We already know the answer, though. We know the answer from
so-called reality TV. This began 30 years ago, right? It's all fake, right? It's all
manipulated. It's all scripted. It's all produced, right? So, so even if it looks real and it seems
real and they're not really actors on the screen, they're just real people like you and me.
It's all manipulated. It's all orchestrated. So that's the media waters we've been swimming in for
the last 30 or 40 years. That hasn't changed. But you can't put that at the doorstep of OpenAI and
say, this is your fault. This is the mainstream media's doing, as you said, supported by
advertisers. I think people are soured on reality. I think reality is hard for people to
deal with. And we're always craving escape. But what I'm hoping readers of the book
come to at the end is that they're going to get to make choices. They're going to get to make choices
when they go to ask the CVS-ChatGPT amalgam thing what they should take. They've got
a stuffy nose, what should they take. They could say, I want that app to show me what other choices
it considered. Like, I want to see the decision-making process. And if you think about the difference
between ChatGPT and Google, when you type in, hey, what do I give my kid, my kid has a cold,
you see the answers. You can choose one, but you get to see what it pulled up. And increasingly,
now, all the stuff that's sponsored is the first page and a half. So you have to decide if you
want to look at the unsponsored answers. But I believe that we as a society can say to the
AI world, we demand transparency. We are not going to accept your app answering a question
without showing us what the choices were. Okay, well, that seems like a pretty mild request.
And I think companies like Perplexity are already trying to satisfy that. So in some cases,
the market is responding. It's working. Let's zoom out a little bit from AI and talk about
trust and truth in society. Now, my take is you can't have a functioning democracy if you don't
have some reliable source of truth, some objectively verifiable facts that everybody can reference.
But I think that has all been deliberately blurred. And that is why our democracy is dysfunctional
at this point. Tell me where I get it wrong if I get it wrong. No, you get it right.
So you think that there's been a concerted effort to blur the distinctions between reality
and falsehood. And that makes falsehood plausible, right? So the minute you can introduce
falsehood and say, well, it might be like this, we're in the realm of what you call subjective
truth as opposed to objective truth. But even objective truth, because look at our health,
you know, health and human services right now, they're doing their best to eradicate scientific
research or suppress it and promote things that are basically wacky theories from the internet
as valid health guidance, right? So they're blurring the distinction between peer-reviewed scientific fact,
or at least peer-reviewed scientific theory, and just kooky stuff from the web, you know,
homeopathic remedies or remedies that are unproven and untested. So here we have a kind of a
concerted attempt to blur the distinction between objective reality and subjective reality.
I find the health stuff in particular quite puzzling, because if you remember, during COVID,
the people that were talking about ivermectin or horse tranquilizers or whatever,
if you really dug down, mostly they were getting the actual vaccine themselves. They were saying
one thing and doing another. There are a number of people who don't want to take
vaccines, right? And these are people who are frankly ignorant about the way vaccination works
and what the concept of herd immunity is. So they think they can opt out and they're going to be
okay. They don't realize that actually weakens the group, right? The herd, as they say.
I ran into this issue long ago when I worked for Oprah Winfrey. I was stunned because she had
anti-vaxxers on the show, in particular Jenny McCarthy. And I brought it up as an issue. I said,
this is a guest who is going to spread untruth. She's using your platform to spread disinformation.
And I got shot down. I got shot down by the executive producers. They were very resistant.
And the simple reason was because Jenny McCarthy pulled a rating. And somebody who's there
talking about vaccines did not pull a rating. So I guess what I'm offering here, Steve, is, if we're
debating anything, it's whether this is the fault of AI or whether this is the
environment, the media environment that people like you and I have created and worked in in our
careers. And AI just fits into that as the next iteration.
So a couple of things. First of all, absolutely. Snake oil has always been snake oil.
There was always a seller of snake oil and always a buyer of snake oil. And at the end of the day,
the Oprah folks needed to make a rating and I get that. That is what it is. But I also think
the real question for these companies is what's the line between needing to hit a number and keep
your job and being irresponsible? And when do you hurt America? When do you hurt society?
At some point, spreading misinformation damages people.
And what I worry about with AI is this: Donald Trump was a creation of television.
I mean, The Apprentice, the character he played on The Apprentice, was not a real person.
I mean, literally, that's the only thing he's succeeded at in his career, right? He wasn't a successful casino
manager. He wasn't a really successful real estate developer. But he's a very successful reality
TV host, reality in scare quotes. So here's why I think we're at a moment where there's real
opportunity for change. And that is, you know, we get to question the information that comes at us.
Now that we know that it's being gathered and produced by a company, we can say to them,
you know, please show me. So, for example, I've had conversations with ChatGPT about
its feelings about Sora 2. And we had a long two-hour conversation about, where did Sora
2 come from? Was it good for the business? Why did Sam Altman greenlight it? You know,
why shouldn't I quiz ChatGPT about its own products? Yeah.
Let me guess that it was pretty supportive and had a good spin. No.
Oh, so it was critical. I mean, by the end, I asked it flat out, is Sora 2 dangerous?
And it said, yes. And I said, in hindsight, did it have any business purpose in
releasing it? And it said, no, it did not. Okay. So that, to me, sounds like spin control
from the press department, right? What it's doing is flattering, it's flattering you.
It's somehow detected that that line of inquiry from you is critical. And therefore,
it's giving you, you know, it's fueling your fire. It's manipulating you.
Hey, here's the last thing I want to ask you about. So you work
with young people at the center, you're focused on young people. And
a large number of young people have opted out of traditional media altogether. They don't watch
television. They don't know from cable television. Their source of news is TikTok. You know,
so CNN isn't really on the radar. Neither is Fox News. And a large and growing number of people
don't even spend time with that kind of linear media. They spend time in fictional worlds,
immersive game worlds where they can spend hours a day and they communicate with other people who
share that as a passion or as a hobby. And to my mind, that's like opting out of reality altogether.
You're opting to go into a fictional world and spend as much time as you possibly can. And while
I can totally understand that, you know, I've been in the game industry, I get that as entertainment.
It's highly fun and it can be addictive. My concern here is that an entire generation is being
conditioned to opt into unreality as a way to escape from confronting the hard truth of reality.
So to some extent, in, you know, a consumer society, we're presenting people with two choices.
Choice A is entertaining and fun and flattering and obsequious, and it's pure pleasure, and it's
fake. And choice B is: deal with the facts of the real world. It's a tough world. Politics is tough.
Business is tough. Society can be very hard. Well, that's no fun for people. So guess what?
Everyone's heading for option A. What's your take on that? In our closing moments,
give me a perspective on that generational shift and what the future looks like.
So not to disagree with you, but I have to at least in the Gen Z community that I spend my time in,
I find them to be deeply thoughtful, concerned, engaged, passionate, hard working.
All of the stereotypes that people want to tell me about how Gen Z is behaving
don't show up. Maybe it's a subset of the community. But I don't know what you were like
when you were 15 or 16 years old. I was kind of a goofball. I mean, you know, I was not thinking
about world politics or health or environment. So I'm hopeful that given the right tools
and as they continue to understand increasingly their power, I mean, one of the things that we
haven't figured out both at the center and kind of in the world is Gen Z has enormous economic
power. And my advertising friends, my advertising friends live in absolute fear
that the Gen Z community will decide they're angry at their brand for something they did.
Well, good. That's good. They should be afraid. That's very useful. So I think economic power is
what it's all about. I think truth kind of hangs in the balance. And,
you know, when things seem too good to be true, they are. When the robot says to you
every morning, Rob, you're brilliant and handsome and smart and funny and the world doesn't appreciate
how wonderful you are, you should take that with a grain of salt. Because, you know, from now on,
I will. You know, when you go to the clothing store and the person says, oh, that looks
great on you, oh yeah, I have one just like it, you should know that that's bullshit. I mean,
they've got stuff to sell. It seems so believable. It does seem so believable. So with that, I would say,
the book is out May 12th. Right. Money-back guarantee: buy it, and if you don't like it, send it back to me.
I doubt you will. It's a good journey. That's great. Steve, thank you very
much for joining us on the futurist this week. Folks, we've been listening to Steve Rosenbaum. Steve,
where can people find out more about you in the center and then tell us where they can find out
about the book? Sure. TheFutureOfTruth.com will tell you about the book and all of the links and
all of the work we're doing. And I'm, you know, I'm out and about and traveling and talking to people
and the center is the Sustainable Media Center, at SustainableMedia.center. And both of those sites tell
you everything you need to know about me. Super. Well, thank you very much for joining us on the
futurist this week. And good luck with the book launch. I'm a fan, and I've already pre-ordered my copy.
Folks, The Future of Truth hits bookstores in mid-May. And you can pre-order it now on Amazon.
I already did. And thanks for listening to the futurists. And next week Brett and I will be back
with another person who is thinking about and shaping the future that they envision.
Until then, we'll see you in the future.
Well, that's it for the futurists this week. If you like the show, we sure hope you did.
Please subscribe and share it with people in your community. And don't forget to leave us a
five star review that really helps other people find the show. And you can ping us anytime on
Instagram and Twitter at @futuristpodcast with the folks that you'd like to see on the show or
the questions that you'd like us to ask. Thanks for joining. And as always, we'll see you in the future.