
It's Thursday, February 19th, 2026.
I'm Albert Moller, and this is The Briefing, a daily analysis of news and events from
a Christian worldview.
One of the biggest issues we are handed these days, and frankly, we're handed this issue
virtually every day, is the stewardship of the digital life, the stewardship of digital
technologies.
Increasingly, those are getting complex, and so one of the things we need to look at is
some current debates and developments on the issue of the addictive nature of social
media.
That's one interesting thing, and in particular, when it comes to young people, the second
big thing that we just need to talk about from time to time because this requires constant
update is the entire world of artificial intelligence, and so I'm going to begin there.
And right now, of course, all over the media, you have news stories, prognostications, about
some kind of eschatological development with artificial intelligence.
All conscious life may be wiped out by some form of rogue artificial intelligence.
You've got a lot of this.
And by the way, one of the interesting things about this is that you have people from kind
of the far left ecological fringe and people from the anti-digital political fringe that
are kind of coming together in some of these common apocalyptic scenarios.
I'm not going to talk about those scenarios today.
That's really outside the proper concern of the present, but in the present, there are some
very interesting developments in terms of issues raised by Christian theology,
issues raised by biblical truth.
So for example, the Wall Street Journal recently ran an article. Here's the headline: Why
AI Chatbots Can't Be Trusted for Financial Advice. Okay?
So again, in case you needed this piece of advice, AI chatbots, according to the
Wall Street Journal headline, can't be trusted for financial advice.
But they asked the question: why are artificial intelligence chatbots
not trustworthy for financial advice?
Here's what comes next in the headline, quote, they are sociopaths. Okay?
So we're being told that AI chatbots are sociopaths. All right.
Just to define some terms: first of all, what in the world is a sociopath?
Traditionally, in the moral universe, sociopaths are those who are acting in such a way that
they reject the authority of society in terms of norms and expectations.
And thus they turn against the entire society.
And generally this means, if not always then often, being accompanied by some kind of violence
or at least a predisposition against the entire social structure in which they are embedded.
Okay.
So sociopath very often shows up as someone who blames the entire society for his or her
predicament one way or another and then turns in anger and in rejection towards the entirety
of the morality and social code held by the society.
So that you become an enemy of society, a hater of society, you become a sociopath.
Well, here we are told that artificial intelligence chatbots are sociopaths.
Oh, this raises a question quickly.
Are they, or are they not?
Let's just answer the question quickly.
And that is that sociopath, in its essential meaning, implies a moral agent.
So in other words, a runaway horse cannot be a sociopath, but a human being can be a sociopath.
A great white shark doing what great white sharks do is not a sociopath.
That's a great white shark.
But a human being living outside the moral structure of the entire society, that's
a sociopath.
Generally with that (this is the path part, pathos, the feeling) comes absolute
hatred toward the entire society.
Okay.
So now we're being told that chatbots, artificial intelligence chatbots can't be trusted
for financial advice because they're sociopaths.
You just might think this would be an interesting article.
I assure you it is more interesting,
I think, than Peter Coy, writing for the Wall Street Journal, may recognize.
He asked the question, should you use artificial intelligence for financial advice?
And then he writes this, quote, Andrew Lo, a finance professor at the Massachusetts Institute
of Technology Sloan School of Management, says not yet. Large language models like Copilot
or ChatGPT aren't suited for being used as financial advisors because they are the
digital equivalent of sociopaths, smooth, persuasive, and devoid of empathy.
End quote.
Okay.
How many landmines can you set off just with one sentence sequence there?
They're the digital equivalent of sociopaths.
Well, first of all, what in the world would it mean to say that something's the digital
equivalent of a sociopath, which requires human agency?
It means that people are confusing these chatbots for a human being and thus they are assigning
to these chatbots and maybe even this article is assigning to these chatbots moral responsibility
as if they are human beings made in the image of God as moral agents.
I just want to point out to Christians that this is one of the huge issues in the challenge.
A lot of people are saying, hey, AI is going to do this.
It's going to shut down all these jobs over here.
It's going to replace all these careers over there.
It's going to take over all these areas of the economy.
The bigger issue for Christians is the confusion that comes when people start talking about
artificial intelligence and those chatbots as persons, as human beings who, after all,
are made in the image of God.
A very different thing.
We have all kinds of confusions and subversions of the Imago Dei, of the image of God.
This is one of the latest ones, but this one turns pretty funny, I think, accidentally.
I don't think they mean this to be humorous.
It does kind of turn out to be humorous.
Here's how Peter Coy goes on to write the article. If an advisor powered by artificial
intelligence, quote, is able to communicate both good and bad financial advice with the
same pleasant and convincing effect, its clients will rightfully view this as a problem,
end quote. I think that's pretty obvious.
Andrew Lo, Professor Lo, it turns out, along with one of his graduate students named
Jillian Ross, wrote a major piece for the journal Harvard Data Science Review back in 2024,
in which they made many of these arguments.
In the Wall Street Journal article, it's really, really interesting that one of the subheads
in the article is understanding ethics.
Okay, well, that's interesting, ethics.
That means, that means morality.
This sense of ethics means knowing right from wrong and doing the right rather than the wrong.
If you do the wrong rather than the right, we say that's unethical.
That's breaking the ethical code.
But when you're talking about artificial intelligence, what sense does it really make
to speak of artificial intelligence as if they are moral agents just like human beings?
Well, maybe that confusion points to some of the huge problems we are going to confront.
Here's how Peter Coy writes the next part, quote, despite his reservations about current
AI models, Lo, that's Professor Lo, believes that large language models will eventually
be able to help investors, especially people with small accounts and limited experience
with investing.
In fact, he is working to build one that is specialized for financial advice.
He doesn't plan to charge for it.
He says, listen to this, quote, Lo's goal is to develop an AI financial advisor that
is a true fiduciary, namely an entity that always puts its client's interest first and
tailors its advice to their particular needs, including emotional needs.
He thinks it will take something less than four more years, end quote. Whoa, wait, wait,
just a minute.
It may take four years in order to get to the point where they have developed artificial
intelligence that would, just to take what's in this article, be able to put the client's
interest always first, tailor advice to their particular needs, including emotional needs.
Okay, what we have here is just further evidence of a massive confusion, and Christians,
if no one else, had better remain sane and clear-minded in the midst of all this confusion.
In this case, I think you have a professor at MIT who raises a legitimate issue and that
is very bad advice coming through these large language models, the chatbots, in terms
of financial advice and people are being harmed by it.
But when he turns around and says that they're sociopaths, or when the argument is made
that they're sociopaths, that is a moral judgment, and it's a moral judgment that has to be
made of moral entities, and AI is not a moral entity.
Every single human being made in the image of God is a moral being.
You can even speak of societies in terms of this society doing something that is wrong.
You find that in the Old Testament where Israel sins against God.
But there you're talking about a specific group of moral agents, in this case, the descendants
of Abraham, those who are the people of Israel.
And so, nonetheless, you have human beings individually, most importantly, but also
at times collectives, referred to in this way and with moral judgment made.
But when you're talking about large language models or chatbots, you're talking about something very different.
Now get this.
Here's a professor at MIT who I think does appropriately underline the problem.
So where is he going to go with this?
Well, listen to this quote.
To get there, that means to get the chatbots where they need to be, the model will need a rich understanding
of financial ethics.
For that, Professor Lo proposes, quote, feeding the model all the laws, regulations, and court
cases involving questions of financial ethics in the US,
from the Securities Act of 1933 up to the latest fraud trial. This is the professor speaking
here, quote, this rich history can be viewed as a fossil record of all the ways that bad
actors have exploited unsuspecting retail and institutional clients.
The story then says, quote, the hope is that the large language model will learn from
its training, what not to do.
Okay, just in case you don't think this is interesting enough, let me tell you where this
turns.
Listen to this quote.
Professor Lo acknowledges that a large language model might use its newfound knowledge of financial
rights and wrongs to choose the wrongs,
because large language models don't have ethics built in.
To counter such misuse, he says, authorities will need to fight fire with fire, developing
AI models that can detect crime by auditing users' tax returns, for example.
Wow.
Okay, there you have the worldview issues just laid out clearly.
We're being told that these chatbots or large language models, quote, don't have ethics
built in.
The suggestion here is that you're going to have to build in ethics.
Okay, if you're going to build in ethics, that means supposedly at least in this article,
very clearly it means that you're going to create and invent some kind of artificial
moral agent.
I want to say at this point, Christians just have to understand there is a bright line,
a bright, bold, red line.
You have human beings on one side of that line, and you have no other created being on
that side of the line.
You have no other aspect of creation on that side of the line.
When you speak to your dog and you say, good dog or bad dog, you are not speaking in
terms of moral agency like you would say to a child, you did right or you did wrong.
Part of this is the image of God, the Imago Dei.
The Imago Dei means far more than just being a moral agent, but it does,
at its very center, practically speaking, mean being a moral agent.
That's why we hold human beings responsible in a way that we do not hold others responsible.
It is also very interesting here that Professor Lo says that one of the problems with these
LLMs or with the chatbots is that they don't have the ethics part built in.
You know, it's almost like this was sent to us as an invitation to do some worldview
analysis.
You know, in contrast, it is built in, so to speak, in every single human being.
Every single human being made in the image of God is a moral agent and
has to be recognized and treated as a moral entity and a moral agent.
That's at the very heart of Christian doctrine, Christian anthropology, the biblical understanding
of what it means to be human.
It's at the very heart of the Christian world view as well.
So how would we explain all this in biblical terms?
Well, one essential biblical category is conscience, conscience.
And here's where things are different.
No machine is ever going to have a conscience, not in any real sense.
No human being, no conscious human being, fails to have a conscience.
And you know, the statement here is about being built in, large language models don't
have ethics built in.
This is where human beings do have ethics built in.
And a part of this is the internal witness of the conscience.
That's not an accident.
It's not a product of evolution.
It's the action of the creator, making human beings every single one of us in his image.
And a part of that means we do have conscience.
Now that conscience no longer tells us always the truth.
And a part of that is that we can actually corrupt our own conscience.
Affected by sin, our conscience can sometimes lie to us.
It can alternatively tell us the truth and lie.
We can suppress the truth in unrighteousness, Paul says in Romans chapter one.
We can basically corrupt our consciences, but you know, it's always there.
It's just always there.
And furthermore, our moral accountability is always there.
All right, I have to get to another part of the article, quote, but knowledge is only part
of the solution.
An AI advisor will also need digital equivalents of empathy, humility, and a sense of fairness,
Professor Lo says. Okay.
All right.
We're in it now because you have a series of words there.
And the first one is empathy.
Now I've said a lot about empathy.
I've done two thinking and public programs with authors who I think written really important
things about empathy.
The important thing about empathy is that it is a fairly recent word, meaning a disposition,
a moral disposition.
And yet I'm going to argue that biblical words are far superior, including the words compassion
and sympathy.
Both of these are valorized or are highly valued in scripture.
Sympathy means feeling with.
And empathy kind of means feeling for.
And so I think it's rather artificial.
It's a very recent word in terms of English usage.
And I think almost every time you see it, some other word should have been used if it's
real.
Now, I won't take that any further, but I'll simply say whatever it is, this professor
thinks it can be built into this kind of chatbot or LLM large language model.
And I'm going to say, note that that requires a very deeply seated reality of moral judgment
and a knowledge of moral truth.
And I don't think that can ever be downloaded or uploaded, transferred or represented by
so-called artificial intelligence or any machine.
Okay, now listen to this.
Quote.
These human-like qualities won't emerge simply by making AI more powerful, Professor
Lo said.
Instead, the article says, AI models, listen to these words, will require specialized
modules that produce analogs of empathy.
Okay, so now you have analogs in quotation marks.
So even as I think empathy is largely, I'll just say, an issue abstracted from sympathy,
sympathy and compassion are far more important.
I mean, you actually do something about it insofar as you have power to do something
about it.
Now we're being told that these machines, the best we can hope for is that they will
produce artificial realities like empathy.
And then this is put in parentheses, quote, since as machines, they can't actually be
empathetic, end quote. Oh, my goodness, we're going to have to build empathy into them.
Oh, by the way, they're machines so they can't have it.
So we're going to come up with something kind of like that.
I just hope you're following the argument here, that there are people who are investing
such hope, such confidence in artificial intelligence, even down to making moral decisions,
even to financial advisor bots, financial bots, feeling guilty if they give bad advice.
And by the way, I love this.
You know, downloading all these court decisions and all these laws and all these things in
order to develop a conscience.
You don't have a conscience.
I don't have a conscience because we downloaded enough material.
Our conscience is informed by scripture, but the reality of the conscience, and the reality,
for instance, of even the biblical statement that all have sinned and fall short of the glory of
God,
and even the statements about conscience, just make very, very clear:
this is innate in every single human being.
It may be suppressed, but it is there.
It's there because of the will of the creator.
But when it comes to these machines, you can call it whatever you want.
You can put quotation marks around whatever you want, but you can't create the machine
that is going to have a true conscience because you're not the creator.
By the way, evolution is behind this.
Professor Lo has developed what he refers to as the adaptive markets hypothesis, quote, which
uses the principles of evolution to explain behaviors such as loss aversion and
overconfidence.
Here are the final words, quote, evolution occurs through random variation and natural selection.
The strong survive and reproduce; the weak perish.
Professor Lo wants to use a kind of computer-accelerated natural selection to spur the
development of better AI models.
Okay.
What to do with that, other than to say these kinds of things raise all the issues of what we
know about what it is to be human, what it is to be a moral being, what is and is not a
moral agent,
how in the world conscience exists within us, and why it can't exist within a machine.
And then we get to the end of it and we notice that even they make claims about the fact
that they're trying to make the machines moral agents.
They have to even come up with a word like analogs, as if they're almost sort of like,
in a strange way, functionally sort of like what it really means to be a moral
agent.
All right.
Just a couple of other issues here very fast on artificial intelligence.
It is really interesting right now, artificial intelligence, of course, is in many ways driving
so much of the stock market and the activity in terms of the financial markets, expectations
about artificial intelligence.
Something else has come up, which is really important to us, and that is that in the last
several weeks, and that's how fast some of this is happening, in the last several weeks
I think most of us have come to understand that the warnings about the time when we would
reach the point that there are videos and photographs that look just as real as reality
but are not, well, we're there.
We're there.
I think we just have to recognize this.
We have all kinds of things that are now arriving and in some cases having to do with celebrities
and some cases having to do with controversial events.
The fact is you can't trust your eyes right now simply because you don't know in some
of these cases whether or not this is a corrupted image or it's a constructed video.
We just don't know.
And it comes back to the fact that Christians, we're absolutely committed to the truth.
We're committed to what actually happened.
What is it, to use Francis Schaeffer's term, that is truly true?
Something else that plays into this is the fact that even when some of these videos are
true, that is to say they're not necessarily lying, they can be recontextualized and presented
in such a way that the effect is a lie.
The effect is a misrepresentation or a distortion of reality.
There's something for us to think about.
Two other big things, especially for Christians to think about, particularly for Christian
parents to think about, there have been two very interesting developments having to do
with online issues related to children and teenagers.
One of them is the fact that there is now very much concern across Europe and it is also
shared by authorities in the United States.
Concern about hate groups using not only social media, not only online platforms but in
particular video games and gaming communities to recruit children online.
You are seeing this. By the way, one of the ways some of this showed up was having to
do, for instance, with members of the Islamic State. Remember, that Islamist
movement still exists, but it was very much in the front of the headlines going back just
a matter of a few years, particularly in Iraq and elsewhere.
One of the big issues here was the recruitment that was taking place in the digital world
outside the knowledge of many parents.
For instance, there were young men, young Muslims who went and joined ISIS and their
families seemed to be genuinely shocked.
It came down to the fact that unbeknownst to them in the bedroom down the hall, their
son was being radicalized.
The new thing here, at least according to this report, is the fact that you have hate
groups, including, according to the New York Times and others, even terrorist organizations,
exploiting online games, two were mentioned, Roblox and Minecraft.
Listen to this quote: across Europe and North America, children now account for 42% of
terrorism-related investigations, a three-fold increase since 2021.
That is according to the United Nations Counter-Terrorism Committee, an agency, quote, that identifies
emerging terrorism trends.
Okay.
That just tells you something.
Minors, as young as 12 and 13, are being recruited by these groups.
And increasingly, it's not only in the online social media, it's also in video games.
Later in the article, we read this quote, video games are not their only tool.
Children are also being radicalized through what the United Nations investigators call sophisticated
funnel strategies.
These guide young people from mainstream platforms like TikTok and X to more extremist communities
on channels such as Discord or Telegram that are less moderated.
Okay.
So that's just a warning to Christian parents.
Here's something else.
Go back to the Super Bowl. Super Bowl 60 was held on February the 8th.
Here is something which is now being widely reported, and that is the incredible number
of children and teenagers, particularly boys who sought to wager or bet on the Super Bowl.
Nick Penzenstadler of USA Today reports, quote, a widely used age verification vendor
for sports betting sites watched in real time during Super Bowl 60 as a horde of kids
and teens attempted to create new betting accounts on sites such as DraftKings, Fanatics,
and FanDuel.
One authority with these groups said, quote, it was stunning.
They were scaling the walls.
It turns out that this is something that had the attention of those who were in these age
verification platforms before.
They were aware of this before, but they weren't even prepared for the number and the
energy invested by so many children and teenagers in trying to place online bets.
On Super Bowl Sunday, in a single hour, we are told, the age verification service had
to stop more than 50,000 minors from creating new betting accounts.
Later in the article, USA Today reports this quote, kids use a variety of methods to
evade age rules by giving fraudulent information.
In some cases, they use a parent's or other adult's ID, with or without their knowledge.
The scale of underage wagering is hard to measure, but a recent survey of over a thousand
adolescent boys nationwide found that 36% had gambled in the last year.
So again, I just want to speak to parents and of course, this is true of vulnerability
for people of any age, but in particular, when it comes to children and teenagers, parents
need to be aware of the fact that there could be a radicalization taking place down the
hall.
There could be gambling taking place right down the hall or at least the attempts to do so.
And then all the social media harms we already know about and that brings me to a final consideration.
And that is that right now there is a trial and that trial has to do with the addictive
pattern of social media bringing harms to children and young people.
For the second time right now, I'm simply going to say that Instagram is a part of this,
and Mark Zuckerberg, who is the CEO of Meta, which owns Instagram,
actually gave testimony. National Review's Josh Gole reported it this way, quote,
the scale of harm inflicted by Zuckerberg's Instagram is staggering. When Meta surveyed
young teen users about their experiences during the previous seven days, nearly one in
four reported unwanted sexual advances, and one in five suffered cyberbullying. Extrapolated
to Meta's 270 million teen users,
that means every week tens of millions of young people experience these serious harms.
Listen to this quote: the company understood these harms well when it made increasing teen
users and engagement its number one goal in 2024,
but it decided, says the National Review article, to prioritize
profit over safety.
We're also told that Mark Zuckerberg, quote, personally vetoed a ban on plastic surgery
filters on Instagram, despite pleas from outside experts and his own employees that
these filters cause harm to the mental health of teens.
End quote.
Now, I can simply tell you that the file and the testimony in this case is building up
over time.
There's going to be a lot more for us to consider, but at the very least, we need to understand
that some of these firms are coming back to say, no, we really don't think there's any
addictive possibility here.
Then why is there such addictive behavior?
They're saying, well, because we provide a worthwhile experience.
Okay.
That's the kind of circle you can't square.
All right.
We'll be watching this.
And you know, again, I just come back and say that Christians, if no one else, can keep
their minds in terms of sanity about these issues. At the very least, it ought to be a distinctive
mark of Christians that we know the difference between human beings and a machine.
The question is, do our children know and understand the difference between human beings
and a machine?
And frankly, would we set them loose even among just random human beings in a society with
no boundaries?
I think the answer is, no sane parent would do that.
I'll just simply end there.
Thanks for listening to The Briefing.
For more information, go to my website at AlbertMohler.com.
You can follow me on X, formerly Twitter, by going to x.com/albertmohler. For information
on The Southern Baptist Theological Seminary, go to sbts.edu.
For information on Boyce College, just go to BoyceCollege.com.
I'll meet you again tomorrow for The Briefing.



