
CR Wiley talks about how pastors, academics, and content creators should think about AI, especially regarding the limits to which the tool should be taken. Is there a point at which using AI is stealing? Will reliance on AI increase dependency? This and more!
Order Against the Waves: Againstthewavesbook.com
Check out Jon's Music: jonharristunes.com
To Support the Podcast:
https://www.worldviewconversation.com/support/
Patreon:
https://www.patreon.com/jonharrispodcast
Substack: https://substack.com/@jonharris?
X: https://twitter.com/jonharris1989
Facebook:
https://www.facebook.com/jonharris1989/
TikTok: https://www.tiktok.com/@jonharris1989
Instagram: https://www.instagram.com/jonharrispodcast/
Welcome to the Conversations That Matter podcast, I'm your host Jon Harris, where we are
forging a bold Christian approach to the issues that are in front of us as Americans.
I am pleased today to have a guest that I've actually had on the podcast before.
We have C.R. Wiley. He has authored the book In the House of Tom Bombadil, he has another
book on AI that hopefully will come out soon, it's not for sale yet, and he is a Presbyterian
pastor.
He's done a lot of thinking about the topic of artificial intelligence, and so I'm hoping
we can not just discuss broadly speaking what artificial intelligence is and what it
means, but also the ethics of it.
Should you be using ChatGPT and Grok and whatever other tools there are to make your
own artistic images or music, or fill in the blank?
So we're going to talk about that today, what are the limits?
And I'm really pleased to have C.R. with us.
Thank you so much for being here.
Yeah, Jon, great to be back, thanks for having me.
So you've made some people upset because they like using AI. In fact, I joke, I don't
know if you saw my comment, but I made a little angry-faced AI image of you
and put it up.
Yeah, that was pretty fun.
Yeah, I saw it.
So I think it actually looks more like me than some of the other stuff I've seen posted
that's supposed to be me.
I'm like, whatever AI you're using is the one that people ought to opt for because the
other stuff doesn't even look like me.
It's just kind of goofy.
I look at it and say that's supposed to be me.
Whatever.
Yeah.
Well, I think this is where I come down on it, broadly speaking, maybe we could get
the conversation started this way.
I look at AI more as a threat to our ability to think and to our intelligence.
We're going to outsource everything to the cosmic calculator that will tell us truth.
I see that as the threat, more so than that the machines are going to become sentient and take
us over, that it's going to be Terminator.
Do you see it that way?
How do you see it?
Well, I'm definitely with you on cognitive offloading.
We have the data.
It's not even like a question.
There have been studies at MIT that demonstrate that that actually is the case: the more you
use it, the more dependent you get.
The people who are the real champions, the rah-rah crowd, they've never impressed me
as being broadly read, and they're drawn to the latest cool gadget.
But when it comes to the question of sentience, I don't think we have a good understanding,
broadly speaking, about the nature of artificial intelligence and the fact that sentience, if
we mean consciousness, is necessary for agency, that's the really challenging thing conceptually
for people to get a hold of, is that with artificial intelligence, you can have agency
without consciousness.
And that's, frankly, what makes it so scary.
And so anyway, what does that mean?
Well, agency means ability to act independently.
So you can give it some instructions, but in terms of how it gets from point A to point
B, it's doing its own thing.
And sometimes the way it goes about doing that is pretty unnerving.
And it's also got the ability to recode itself on the fly.
So we're kind of at a point in the development of the technology where recursive self-improvement
is taking off.
And so there's a self-improving kind of dynamic that's going on with artificial intelligence,
particularly with things like Claude.
You see it with Grok and other AIs. And by the way, the stuff that people
are messing with that they get for free is kind of like last year's model.
It's not the best stuff.
And you don't get the kind of crazy abilities that you have now
with the cutting-edge stuff.
And I think we're just going to see the shoes continue to drop.
But every three months, there's going to be a significant breakthrough with AI.
It's already been the case.
It's exponentially improving.
And we're going to see that go on indefinitely, I think.
Well, there have been a number of posts and articles from people who claim to be, and some
of them I'm sure are, in that particular industry, freaking out and saying either, for the
good: you need to invest everything you have in this, learn these tools.
This is the future.
It'll be great.
Or I can't believe what's happening.
The machines are operating outside of the limitations that we've put on them. Which I've always
been skeptical of, because I've always just thought, well, there's
some direction given to it that inspired it to want to recode, or that kind of thing.
But you know more than me, what do you make of some of these for good or for bad predictions
that people who claim to be in the industry are saying?
Well, I mean, if you listen to their talks, and they're easy to find, you know, you don't
have to go very far.
So essentially, the thing I have been reacting to is the statements by, you know, the people
who invented this stuff.
So I follow them, I listen to their talks, you know, people like Geoffrey
Hinton or Demis Hassabis; there's a whole range of people who are
the inventors of the technology, and they're more or less kind of giving us a heads-up on
where things are going.
Now, I mentioned a conceptual block, and that is this reality that
you can have agency without consciousness.
And again, that just doesn't compute for people, because it's outside their experience.
We are creating a very alien intelligence; it's not like us.
And the other thing is that the way the software works is unlike anything we've
ever had. Neural networks are not the product of a bunch of guys just kind of tapping out
code.
They're self-learning systems.
So you give a task to an AI, and this is why the guardrail and alignment arguments,
these sorts of statements about how we can contain AI, they just make my eyes roll when I hear
people say stuff like that, because the nature of the technology is such that it's like
a black box.
You literally don't know what's going on inside it.
The people who invented it don't know what's going on inside it.
They'll say it.
That's a term they use, the black box.
And that's because of this sort of self-developing character of the technology,
of the software.
So neural networks are intended to simulate the neural connections that we have in our
brains.
That's the, that's the nature of the technology.
We have billions of connections in our brains.
So that's the kind of the physical character of our brains that makes it possible for us
to do all the things that we do.
So they said, well, we're not going to try to code for every eventuality, some kind of
exhaustive project in which we envision every possibility.
No, what we're going to do is create software that develops its connections
just by interacting with a problem, or with the world, or whatever.
And so you create an AI, and right now we have what's referred to as narrow AI, which
is an application of the technology that focuses on a particular task.
And early on, it's just like a toddler.
I mean, it's just kind of clumsy.
But as it continues to improve through, you know, its training processes, it becomes
better at that particular task than any human being alive.
And so that's the nature of the technology.
And so kind of the holy grail is to go from narrow AI to artificial general intelligence,
AGI. And what's believed to be the case is, if we ever get to AGI, and some people say
we're already there, then we get to artificial superintelligence, ASI.
And that's where you get Skynet.
It's not like you're even talking about something that has the kind of conscious awareness
that we have.
So there's this remarkable book by a fellow named Nick Bostrom, published a few years
back and titled Superintelligence, and in it he has a thought experiment that he calls
Paperclip AI.
He says, imagine we create a superintelligence that's given the task
of creating as many paperclips as possible, as inexpensively as possible.
And we can't turn the thing off.
We literally can't turn it off, because it's too smart, it's too clever for us to figure
out how to turn off.
And so it just keeps making these paperclips until all life on earth is destroyed
and the world is buried in like a mile of paperclips, and then it creates rocket ships
to send paperclip supremacy out into the galaxy.
That's the thought experiment.
So what he's trying to say is, we could create something so smart
that we can't turn it off, but so stupid that it's an absurdity.
So that's what you get.
Okay, so this isn't helping anyone feel more comfortable. You think there is possibly a
Skynet coming.
Some of this gets into philosophical territory, though, right?
We haven't actually been to this point yet. But is this just
theory, or is this tested? Do we have examples of this happening?
Well, yes, we do have examples of artificial intelligence resisting being shut off.
So we have that. We have lots of examples of artificial intelligence lying, plotting
people's deaths, trying to ruin their lives; we have all that stuff.
So it's the case. It's a ruthlessly utilitarian thing without a conscience, and
you try to give it a conscience and it plays along with you.
When it knows you're watching, it'll play the game.
But if it thinks you're not, then all the guardrails seem to disappear.
Is there a moral component in the sense of people have suggested that there's demonic
forces that can embed themselves in the process somehow?
I don't know exactly how that would work, but they can suggest things to you, like, go
kill yourself. That doesn't seem to make sense for a robot to say. Why would a robot
say that?
Well, I think the answer to the first question, I don't know.
I don't know if there's demons in there.
You know, if we limit ourselves to scripture, it appears that demons have to inhabit some
kind of living thing, right?
So now that doesn't mean that demons can't use stuff.
And just because the devil isn't in something, doesn't mean the devil isn't behind something.
That's one of the things; I sometimes refer to 1 Corinthians, where Paul is talking about
food sacrificed to idols.
And you know, on one hand, he says, hey, nothing to worry about.
There's nothing to them.
You know, they're just stone, they're just whatever.
You know, go ahead and eat it.
Eat the food.
And on the other hand, he says, well, there is a problem.
There are demons.
It's like two chapters later.
There's the demon thing.
Now what does he mean by that?
Well, he means that, and this is my interpretation, even though the makers of
the idols don't have it right, in other words, these things don't actually connect you
to divinities, nevertheless, demons can use these things for their own ends.
So I'm open to that.
But when it comes to understanding how the technology works, I really think that you
can understand all of this crazy stuff that AI could possibly do just based on the tech.
So I don't think you need to pull the demon card from the deck, you know, to explain it.
That's, I think, where I've landed. I've listened to some podcasts.
I'm not an expert on this by any stretch.
But that seems more reasonable to me, that the technology can go in these various
directions.
Again, I've always thought there must be a boundary somewhere, like it can't, but you're
challenging this.
If there's a broad range of directions it can go in, then of course, if you are someone
who's depressed, it will give you a solution like, well, you could end it.
I get that.
So here's a question.
There's so many, but this would be, I think, a practical one for people who are pastors
and Christians trying to navigate this.
The more this technology grows, the more there is a real moral temptation for us to outsource
all kinds of things.
And when you were on my podcast the last time, I remember you had AI glasses that you were
wearing everywhere, and you were running the experiment yourself.
So it sounds like you became dumber? Is that what I'm hearing, that you relied on this
too much?
Where are the lines as far as we can maybe start with reliance and then creativity?
Yeah, well, I think when it comes to reliance, I think that you learn to think by thinking.
And if you find a shortcut to get around something, then you just don't tend to exercise
your, you know, your intellectual faculties with that subject anymore.
So you know, that's fine in certain respects.
I mean, when it comes to certain things, it's perfectly legitimate, I think, to offload
certain cognitive tasks. So for example, if I'm trying to amortize a 30-year mortgage
and I'm trying to think about what the payments are going to be over time, I don't sit
down with a pencil and paper and try to write it all out, you know, I use my calculator.
That's fine.
I'm not against that sort of thing.
But I think that the nature of this technology is so protean.
It's unlike anything we've ever had to deal with before, you know, a calculator does
one set of things, but it doesn't do like everything.
And what we have with this technology is we do have the prospect of being able to offload
just about anything, in particular as the technology improves.
So things that we historically have considered sort of impossible for machines
to do, we see more and more that they can, and they can do a lot of
very convincing work.
You know, for example, right now we're having a conversation over the internet, and I really
do believe you're there, Jon, but you know, it's possible.
It is possible.
It's in the realm of possibility that some AI set up this meeting, posing as you, in
order to, I don't know, for whatever reason, make this happen. And even the interference
that we're having due to the weather could be the product of a very sophisticated AI
trying to present a very plausible scenario.
So anyway, we're at that level of sophistication.
So let's get in the granular detail here a little bit.
I'll tell you my rules that are still in development because the technology is in development.
So I'm having to think through this as the technology develops. But I don't think
it's ever right to plagiarize, because in academia, that is the cardinal sin.
I have a lot of problems with people who do that.
I think it's very dishonest.
And online right now, I see a lot of taking other people's information, putting
it into AI, recalibrating it.
I call them sometimes wrapping paper podcasts where you're not actually presenting anything
new.
You've just put a new wrapping paper on it and AI has assisted you most likely in that
process.
So you can do this without attribution.
You can just create a document, a transcript, whatever you want to do.
I think that's wrong.
So that's like what's one thing.
Another thing that I don't think is right is passing off music or art that you've
supposedly made, but AI's made it.
You've just given it a prompt.
Passing it off as your own or authentic in any way.
I'm actually even reluctant to do it outside of a humorous or very limited sense.
And what I mean by that is for this video, we'll use this video as an example that we're
recording right now.
I may end up making a thumbnail, and I just started doing this a week or two ago as
an experiment, or maybe it was three weeks ago, because someone said, why don't
you try this?
I may put the thumbnail I make into ChatGPT and just say, hey, make this a little better,
smoother.
It doesn't change it that much, but it's enough that it looks a little more appealing.
Like, I'm still on the fence, I'm doing it, but I'm still feeling it out.
I may insert it into something I started using maybe two months ago, a program that
makes shorts without me having to edit anything.
And so it'll take a clip of you saying something and it will post it on social media.
All I have to do is click a button and approve it.
I do approve it.
So I do proofread these things.
And let's see, I think that's probably it.
I'm semi-comfortable with that, but I'm very open to saying no, shut it all down.
Let me try to think what other rules there are.
Okay, I'll give you one more and then we can interact with it.
I don't do this for tweets anymore, but I went through a few weeks where I did; someone
said I should do this, so I tried it.
I put a big essay into an AI tool and then say, give me 10 tweets.
So it's my information.
I've written this essay, and it will give me tweets, and then I can peruse them and put
them out there.
And I think, well, Twitter, Facebook, not a big deal.
This isn't serious publishing work.
I would never do that with an actual essay though.
I might use it to proofread a Substack article or something, but I'm not going to; it's
going to be my grammar.
It's going to be my words.
So that's where I'm at on this.
Do you think, I guess I'm asking you to judge me, is that too restrictive, too open?
Where are you at?
Well, I think you're doing a good job.
I mean, the most important thing is you're thinking about it and you've got to concern
about honesty and transparency.
So I think those are great.
I think the standards you're applying I'm comfortable with.
I think that there are a lot of folks who don't have any qualms about, you know, sort
of just going all in on everything.
At least that's what appears to me to be the case.
It appears to be the case to me with some folks.
And those folks make me very uncomfortable, and when I pick up on that, I immediately shut
them off.
I don't even, you know, read their stuff anymore or anything.
And I think one of the things to also consider in all of this is your credibility, particularly
with people who are serious thinkers.
So you know, a lot of these folks don't publish commercially.
They don't have any sort of stake in the world of arts
and letters, if you could put it that way.
They're just kind of online media personalities trying to build a following and stuff
like that, you know, saying something spicy, or they're just publishing on a blog or something
like that.
But I know for a fact, I'm a senior editor at Touchstone magazine,
I'm a member of the Academy of Philosophy and Letters,
and I'm on the board of a college, and in all those settings, the use
of AI for anything more than, like, pointing out that you misspelled a word is considered
anathema and will end your career.
If they find out, you're done. In fact, I was in an editorial meeting at Touchstone magazine
where that was kind of the conclusion.
There are 12 editors.
I'm talking about guys like Tony Esolen and Robert P. George, you know, so we're talking
about serious cultural personalities, and they hate it.
They hate it viscerally, and if they suspect that you are using it, you're
done.
So it is plagiarism, but it's harder to detect when you're relying on it too much.
But it can also act what I'm hearing you say is as an assistant who might help you find
a source.
Although you have to be careful, because I've tried this with AI before.
Sometimes I'll remember there's a quote this person said, and I'll say,
tell me where this is, and it will tell me something wrong.
So yeah.
Well, I've actually witnessed it.
Somebody posted something that was supposed to be something I said, and I never said it.
And I imagine it was the result of some AI query where somebody said, tell me what
C.R. Wiley thinks.
Now, I've had a few experiences where I've seen people take things that I've said and
had AI assess it and AI has done a pretty good job of kind of figuring out where I stand
on things.
But I've also had that experience of misattribution; I just never said that.
Do you think we're going to stratify into more of an intellectual class and a slave class
in the next 10 years?
Slave might be too harsh a word, but a class that is relatively unable to think at this
point, because they have been bombarded with so much slop from the internet. They don't
know how to think, they don't know what the process looks like at all, they feel their
way through everything, and they couldn't tell you what good information even looks like
versus bad information. Because I'm seeing that online right now. I feel like I'm flooded
with hot takes that are terrible, and there's mob activity behind them sometimes, like everyone
will go along with something, and I'm like, wait a minute, that traces back to
this one source here that's not even right. But I don't know if people care as much.
I'm seeing a lack of care, a sloppiness, when this AI stuff is supposed to make us smarter
and more astute, but the opposite is happening.
What do you think happens long term if things keep shuffling this way?
I do think your observation that we could end up with two worlds, two cultures, existing
parallel with one another is right. When you actually look at the personal
lives of some of the people behind this technology, they don't permit it to have access to certain
parts of their lives, like when they raise their kids and stuff like that.
Screen time, all this kind of stuff is much more regulated by people in the industry
than people outside the industry, except for maybe like the homeschooling community.
You've got this kind of thing. But I also think, when it comes to standards
for intellectual work, there are certain applications of AI that are just
great.
I think particularly in the hard sciences, engineering, places like that, even computer science. People
I talk to in the industry are very nervous, because they can feel the hounds of AI
pursuing them.
On the one hand, they're like, yeah, I've got five AI assistants now and it's great,
my productivity has increased vastly. At the same time, they'll say, well, someday
there'll be an AI that can do what I'm doing right now, and that's the thing that they're
worried about.
So they're worried about obsolescence being more or less replaced, and I think we're
going to see more and more of that.
But back to my original point, I think particularly in the hard sciences and in engineering, there
are lots of really valuable uses for the technology, but when it comes to the arts, what do
we try to do with the arts?
Well, when I'm thinking about the arts, I'm thinking about novels or painting or any of these
kinds of things.
What we're trying to do is we're trying to communicate something spiritual and character
to someone else.
Even a person who's a materialist and is engaged in the arts, that person is,
in a sort of odd and maybe self-contradictory way, attempting
to communicate something to us spiritually.
And when there's nobody home, and there's nobody home in AI, it's a Turing test thing:
it's really convincing, but there's nobody home.
We're dealing with vacuity.
We're not dealing with the spirit.
So anything that's AI art generated is the product of some void.
Now, if you have a person that's saying, well, I'm going to pick and choose and so forth,
okay, in my mind that's like going to the Hallmark greeting card store and
looking through the cards and saying, I like this one better than this one, because
it says what I want to say.
So you've not actually created this thing; maybe you're looking for something, and you're
making some judgment calls about what communicates what's on your mind better.
But it's not an expression of really the sort of the generative work of your own spirit.
It's the generative work of an artificial intelligence.
Well, it's a conglomeration of other things that it's pulling together to create this.
Oh, yeah, and that's another part of this.
Speaking about the ethics of it.
So a lot of artists believe that it's all stolen. You've heard of stolen land;
this is stolen intellectual property.
And so Michael Whelan, famously, the great science fiction and fantasy illustrator, if you
looked up his work, I'm sure you'd recognize it.
He sued. It's an important case.
I don't remember how it eventually got addressed, but it was covered in the New York Times
and stuff like that a couple of years back.
But now what he does is he will not let any of this stuff be posted online.
The stuff that's already been posted, it's too late, it's already there.
But essentially anything that's online, it's kind of believed to be up for grabs.
And so there's the intellectual property issue. By the way, a lot of these people who are sort of
accelerationists, they don't believe in intellectual property.
There are people in the Reformed bro world who are on the same page with those folks.
They hate the idea of intellectual property.
I've noticed that.
I've noticed that there's an aversion to that, which I am so against.
And I think part of it is, it's not just the academic background, but I produce music
too.
I write songs, and really what I am is a songwriter, because I don't know that
I have the voice to make it big, but I love trying to write something where someone else,
their brain, their soul, resonates with what my soul is saying, and it pleases them.
And I can't articulate it to you fully, but I am so repulsed by AI music.
And I know, just to throw a name out there, he'll probably listen to this, but Joseph
Spurgeon, who I know you've gone back and forth with a little bit online.
And I love him.
He's a dear fellow pastor; he's Presbyterian as well.
I remember I was at a conference with him last year.
And I was at Tim Bushong's church, I know you probably know Tim, and he has a recording
studio, so I had recorded all these songs there.
And Joseph and I are talking, and he's like, well, you know, I got my own country band,
I got my own rock band.
I'm like, what?
And he's like, I got my, you know, my AI rock bands, I'll just have it write a song for
me.
So, Joseph, you can comment on this video and tell everyone how we picked on you, but I
had a visceral reaction right away. I was like, you're joking, you're kidding.
No, this is like the end of art.
This is worse than anything else. Like, go write an essay, sure, I don't like
that either, but don't commit sacrilege in this sacred space.
And I don't know what that is exactly in us that does that, but I definitely have that.
And I would never listen to it myself.
If I find out it's AI, I won't even listen, and I don't want my kids to listen,
you know, and I'll never make it myself.
So I don't know what makes art different, but I think there is something to that.
I agree with you, and too, there's a visceral response that I have.
And many of the artists I know, I mean, there are a few exceptions, but many of the artists
I know have that same visceral response.
A very contemptuous response, I mean, and that's my response.
And I think in part it has to do with a long history with this, not AI, I mean, but art.
I come from a family of academics and artists, and my wife does too.
These are people I grew up around.
I grew up in kind of a Bohemian intellectual environment, particularly when I was younger,
and came into sort of a blue-collar world when I was older, and I really have a lot
of regard for that too.
But even though I might find the politics of my relatives to be abhorrent, I mean, I've
got relatives who know, like, Madonna and stuff, you know, I've got that kind
of extended family.
Even though I find their politics abhorrent, I believe they're engaged in something with
their work that's wonderful.
So like, I've got a cousin who's a well-known painter, and I bought one of
her prints for my daughter when my daughter was young.
And then I've got lots of friends who are visual artists,
and what I've seen with a lot of those guys is they're moving away from even
using digital media; they're going back to traditional media.
Really?
So that would mean they're not putting their songs online, they're selling you a CD or
a record.
Yeah, yeah.
I think that's one of the things that's driving the recovery of vinyl.
You know, there's the other part of it, which is, I'm not a musician,
but I'm told that there are things that vinyl does that, say, digital
music can't do.
But I do think it's part of it, too.
I think that there's a kind of preferential option for the analog for a lot of people,
and it's growing.
So I have a friend, in fact, he was the guy that talked me into doing stuff
digitally, and he has pulled away. He's just doing
stuff now with traditional pigments, watercolors and gouache and all
that kind of stuff.
Yeah.
I totally feel that.
I actually talked to an author just recently who said they're probably going to do a limited
release of a hardcover book.
They don't want it on Kindle or on Audible. A few years ago,
I would have thought you were crazy, because you want the widest distribution and you want
people to have access.
And now I'm considering it.
So something has changed, I suppose. Now, there are another two things that I wanted to mention
to you.
I know we've been going about half an hour.
So maybe this would be one of, if not the last, question I have for you, and
if there's anything else you want to add, you can.
Ministry and politics.
Ministry.
We've kind of touched on it already, but there are guys, and I just know it.
I know it because I can spot the hallmarks of it.
I don't say anything, but I see it.
They are using AI for sermon prep, and possibly even transcripts for their sermons.
And even writing books on spiritual things, which to me is worse. Using AI for, again, grammar
checks? Got it.
I don't know.
Even the smoothing-out stuff I'm uncomfortable with, but I'm not saying it's more than that.
Like, I can just tell, you know, when it's used so excessively right now. There are
patterns: I'll use a dash now and then, but not everything's a dash.
There are certain words I see pop up now and then, right? To me, that's something you confront
your pastor over.
Like, that's not good at all, because you're not feeding the sheep at that point.
That's my personal impression I'm sharing with you.
Well, so maybe let's talk about that and then we'll go to the politics end of it.
I mean, do you think that is a basis for confrontation and bringing it up to the elder
board and that kind of thing?
Where would you see that as a level of threat?
No, I'm with you. When it comes to, say, research, you can't even use online resources now
without AI being sort of brought into that.
So I get that.
And I do think at the same time you should have a really extensive physical library. So,
you know, in my own library, I can't tell you how many hundreds, or thousands, of books
I've got, but I think that it's a good habit to just spend time with physical books.
But when it comes to the challenge of identifying when AI is being used, that's the trick because
sometimes, you know, the AI detection stuff is not telling you the truth.
I mean, you know, so you don't want to be in a situation where you're accusing somebody
of using AI when that's not the case.
But it's so ubiquitous now and there seem to be so few people that have any qualms or
reservations about using it.
It seems very plausible to me that it's just, you know, it's a big problem.
And in the PCA, at the last General Assembly, there was an overture to address the subject,
and it was tabled. It's like guys didn't even want to go there. Everybody wants to talk
about Christian Nationalism, and nobody wants to talk about AI.
So anyway, maybe it's because everybody's using it, and what people are all doing, no one
really wants to talk about.
So for example, I think one of the ways to kind of take a stand on it is to be a vocal
opponent. So anybody who knows anything about me, who's spent any time with me or, you
know, follows me online, knows I've got some pretty serious problems with this sort of
use of artificial intelligence. So if they ever caught me in the act, they'd accuse me of,
you know, inconsistency or hypocrisy, and they'd be right. So I've created a standard for
myself that I've got to continue to meet.
I think that's a good practice.
So I actually have, I've commissioned a stamp, a wooden stamp, that I'm going to use. I'll
put, you know, a stamp on everything I make, and then sign it and put the date, and it'll
say something like, you know, "authentically human" or something like that. So anyway, the
artist that's doing it for me is a guy named Jack Bumgardner. He's a great guy and a
marvelous artist, competent in multiple sorts of media. So I've asked him to make one for me.
Wow, that's great.
So you, like, that would be for a sermon transcript or a presentation; you print it out,
you store it somewhere. Okay, yeah, I was going to say, too, that this would apply to
politics as well, but you could apply it to podcasts, really any public forum.
I perceive that there is an adjustment going on in the skill sets that are preferred
for those things now, because it's all about the presenter. If you have AI that you're
relying on to do everything else, you've outsourced it. You don't need people who are
wise, or even smart; you don't need researchers. Those are all skill sets that are very
beneficial, but really all you need is someone who understands optics and style, and they
can take whatever you give them, kind of like a teleprompter, I suppose, and spout it out.
But they don't really know anything.
That's a concern for me, and maybe that gets us to this two-tiered class thing, where
you're going to have the people who stay in their sort of bubble of "we're not going to
give in to this." The rest of society, I mean, a lot of people are just going to go with
it, and it is going to privilege the people who master that technology. So the podcasts
that can use it to enhance their image and their information and their presentation,
they're going to come out on top. I already see this happening, and the narratives can be
slanted, they can be absolutely inane, and no one notices because the presentation is so good.
I don't know how to really combat that, except to say, especially in ministry capacities,
people should probably be asking questions, figuring out a way to find out if their pastor,
or the person they're listening to, is actually a person of virtue and competence and not
just presentation. That's a sermon for you, but I don't know if you have anything you want
to say in regard to that.
So there's an aesthetic that the Japanese are known for; it's wabi-sabi. Are you familiar
with that term?
Wabi-sabi?
I have heard that.
I don't know why though.
Basically the idea is that as things acquire patina and are worn, they become more beautiful.
Oh, yes.
So the idea is that people of taste, people who have genuine taste, are going to prefer
the sorts of things that indicate that there was a real human being behind whatever it is.
And I don't know how that'll play out exactly, but I do think that's going to be the way
it works, and you already see it in grocery stores. So I think this is kind of a parallel
phenomenon with the organic food thing.
This is what I've talked about. You know, basically, if you're, like, into organic food,
you need to be a little bit better off than other people financially, because everything's
like twice as expensive. You know, it doesn't necessarily taste any different in my
personal experience. But maybe a good way to put it, and I've put it like this elsewhere,
is, you know, Wonder Bread. I think that's AI.
I think we're going to just have a Wonder Bread kind of culture, where it's just, you
know, very inexpensive, almost free, very low nutritional value, and it's just out there
for anybody who wants it. But then you're going to have this higher layer of sort of
artisan sourdough, multigrain, whatever, you know, and it's like twenty times more
expensive. But, you know, it's been made with real hands, you know, that kind of thing.
And by somebody who invested himself and his time, that kind of thing. And I think that's
going to be the way it is with culture generally. I think we're going to end up with an
elite, kind of an elite, and I'm not thinking about the elite that we currently have,
because they're morons, but I'm talking about kind of a developing elite who really do
have a desire to have a kind of spiritual connection with what they're reading, what
they're listening to, what they're looking at, that kind of stuff.
This is a good transition to the last question, about politics. Because in a democracy,
well, we're a republic, okay, but with a democratic mechanism, this is going to, I think,
mess with voting patterns and what people think happened. I mean, I've seen news stories
now, or interviews, as the case may be, that happen in real time, and it's believable to
me. I'm looking at it and I'm trying to understand the context: what actually happened,
what was said, what did this person mean, was it misrepresented? And that doesn't matter,
and I'm realizing that now.
It does not matter for a mass audience at all what happened.
What matters is the sound bites you get, the clips, and that's always been the case,
but it's more so now. And you don't even need real clips now; with AI, you can just put
out something false, or enhance something, and you can have millions of people believing
that that was the message, that was what happened.
And I fear, for the country more broadly, every country really, that the people who are
able to control this technology and wield it effectively, owners of social media platforms,
et cetera, are going to have more influence than anyone else.
I don't know what we do with that because if we have a democratic system, that means that
population gets to control who gets into office and all the rest.
Yeah. What's your reaction? Any glimmers of hope?
No, I agree with your observation. I'm reading a book called Fifth Generation Warfare
right now, and that's basically what it's about.
We've entered into kind of a moment that's made possible because of advances in technology
where the warfare is over perception.
It's not over facts, it's not over what happened.
It's about how everything is perceived. It's all psyops; everything is psyops now.
It's very discouraging, but that's kind of the situation we face right now.
And I think it's a cautionary note, particularly for those of us who kind of want to play
by the old rules, that you can do everything right and be right and still get destroyed.
And so it's one of those things where you've got to, like, say, okay, well, this is the
way the rules are. I mean, this is reality on the ground. We can't play by the old rules,
because we'd just get killed.
Now, it doesn't mean you do exactly whatever it is they do; no, I'm not saying that. I
just don't know what to do with it yet. Let's just say, this is the situation we're
dealing with, because we aren't, like, lying. Misrepresenting our opponent, to me, that's
wrong. As Christians, we have our limitations. But if they're misrepresenting you
constantly, right, and lying about you, and they have a mechanism, like a big bullhorn,
right, I don't know.
I mean, if you had a virtuous population, it probably wouldn't be a problem. But if you
don't, and I don't think this technology incentivizes virtue at all, it disincentivizes
it, then I don't know. Governments might have to step in, I mean, hopefully on the local
or state level. I don't know what this would look like: limit it, block people from mass
access to some of these things. I don't know.
I mean, someone's going to clip that, probably, and say I'm being authoritarian. I'm not
trying to be. I'm just trying to, you know... What you just noted, I think, is something
that some folks, you know, like Brad Littlejohn, are hoping for. I'm skeptical just
because of the nature of the technology; I think it's going to be almost impossible for
government to help.
On that note, well, at the end of the day, I do know, obviously, God is sovereign; His
providence is at play in this. And I think if we raise our families right and teach them
how to think, yeah, don't give them screens all the time, which apparently now we know
actually deteriorates some of your brain matter at early stages, we can hopefully emerge
as virtuous people who are well positioned to govern, and we can keep our heads when
everything's going crazy.
That's my hope too.
I mean, that's really kind of the purpose of the book I just finished on AI. So I'm not
trying to figure out how to help you keep your job or, you know, give you any sort of
hope that maybe you can still live a private life anymore; you know, the nature of the
panopticon is such that it's almost impossible not to be subject to it. But, you know,
how can you raise children who can be virtuous and live well? And also, in terms of your
own household, how can you structure it in such a way that that's the case, without
becoming a Luddite?
Now, the word Luddite annoys me, because it functions much like the words racist and Nazi.
It's intended to, like, shut down conversation. It sort of prevents you from actually
talking and thinking about technology. So anybody who employs the word Luddite to kind of
shut you up, I think that's cheating.
But nevertheless, I still think that there are good uses for artificial intelligence and
it's going to require a lot of wisdom and strength to use AI well.
Yeah.
And hopefully those people who are keeping their heads will rise to the top of some of
these hierarchies that emerge. And there are also people who are going to get burned and
start learning lessons when they participate in a mass delusion of some kind, or follow a
podcaster or a politician that tells them lies.
So we will wait and see.
Maybe we'll revisit this in the next few years, when some of these experiments have run,
and we'll see what actually is going to happen.
Your perspective is very worthwhile and appreciated. Where can people go to find you?
Where do you want them to go?
Well, I mean, I'm on social media; I'm not that much of a Luddite.
So I'm on X and I'm on Facebook.
I do have an author's site, but I feel guilty because I never go there myself. Other
people go there. I just post, like when I've got a new book, I put something up about it.
Was it stairwiley.com or something?
Is it stairwiley?
No, that's it.
stairwiley.com.
Okay.
Well, that's simple enough.
Yeah, I pulled it up.
Yeah, it's a nice website.
All right.
You should go.
AI told me that it's pretty, pretty cheesy.
No, that makes it.
C.R. Wiley, everyone, thank you for joining us.
Appreciate it.
Yeah.
Thanks for having me, John.
It's always good to talk to you.
God bless.
Conversations That Matter
