
For 50 years, the healthcare industry has been trying (and failing) to harness the power of artificial intelligence. It may finally be ready for prime time. What will this mean for human doctors — and the rest of us? (Part four of “The Freakonomics Radio Guide to Getting Better.”)
Over the past few episodes, in this Freakonomics Radio Guide to Getting Better, we've looked
at a variety of things that may produce a longer and healthier life: nutritional supplements,
faster drug approvals, figuring out the secrets of the gut microbiome. And today, in the
final episode of this series, we'll look at something that intersects with all of those
things and maybe a trillion more. Today's topic, how artificial intelligence will change
healthcare. And why is the healthcare system in need of change? If you look closely, you'll see
a bizarre split. The advances in medicine and medical technology over the past century
have been mind-blowing, but the way these advances are delivered to actual patients can be
also mind-blowing, but in a bad way. I have the ability to put a patient on heart-lung
bypass where their organs are literally failing and we're able to keep them alive. It's truly
some of the most ambitious technology humanity has ever created. And yet, the way that I find out
that someone had a heart attack is still through a pager, and then I have to go and say,
hey, who here is having the heart attack? The healthcare system has so much technology
slop that it can be hard to see just how good the actual medical technology is, but that may be
about to end. If you think about it, this is the biggest experiment in the history of medicine.
And the experiment is already underway. The moment where I feel like I'm really doing science
is when I genuinely do not know the answer to the question, but I know it's important to answer.
Today on Freakonomics Radio, AI and a giant leap into the future of healthcare.
This is Freakonomics Radio, the podcast that explores the hidden side of everything,
with your host, Stephen Dubner.
We could probably make 10 episodes looking at AI in healthcare, but if we want to do it in
a single episode, which we do, it's helpful to speak to someone who is able to frame the biggest
questions well, someone like this. I'm Dr. Robert Wachter, although he says we should call him Bob.
And I'm Professor and Chair of the Department of Medicine at the University of California, San
Francisco. And what does that job entail? What it is is running a large department of about
1,000 doctors, everything from geriatricians and primary care doctors to cardiologists and oncologists,
and we do research, education, and take care of lots and lots of patients. You're still a
practicing clinician as well, is that true? Correct. About one month a year, I do this thing,
a field that I actually started called Hospitalists. So about one month a year, I take care of very
sick people in the hospital. What's your medical specialty by training? I trained in internal medicine,
then did fellowship training in epidemiology and policy and ethics, but I'm an internal medicine
doctor, which in the old days meant you took care of patients in clinic and in the hospital,
and then in part because of the specialty that I kind of cooked up about 30 years ago,
those things have gotten divided, and we have separate doctors for the most part who take care
of hospitalized patients. That's what I do. And how did you become a hospitalist and then kick off
this field of hospitalists? Was it just because you were doing internal medicine in a hospital and
you kind of expanded that practice? Yeah, I had a boss who's a very smart strategic guy who
said the way we organize hospital care is the way we've done it for 100 years and that can't be
right. Let's think of a new way of organizing hospital care. Because at the time, the typical
model was the doctor who took care of you in clinic also took care of you in the hospital,
which makes some sense from a continuity standpoint, but just can't work. It's got a physics problem.
You can't be in two places at the same time. And if you think about it, the fields of emergency
medicine and critical care medicine didn't exist 50 years ago, then people decided there needed to
be a separate specialty with a specialist being a generalist who's a specialist in this place.
So we developed this idea of a separate doctor to be the hospital doctor and lo and behold,
it became the fastest-growing specialty in history. Within a few minutes of speaking with Wachter,
you get a sense of how his brain works. He is drawn to categorical sorting and operational
competence, all of which has been particularly useful in his latest extracurricular endeavor.
It's a book, his sixth, called A Giant Leap: How AI Is Transforming Healthcare and What That
Means for Our Future. When you look at health care, people sometimes say to me, why are you people
such Luddites? Are you kidding me? Come to a modern hospital. We have technology everywhere. Go
to the radiology department, cardiology, surgery, but we have not used general purpose technologies to
transform the way we do our work. We use it to transform the way we do a procedure or the way
we treat a disease and thank goodness for that because we're much better at that than we used to be.
So why is health care delivery still so sloppy? There are a lot of reasons that we are pretty static.
The fixed costs are very high to get into the business. It's almost impossible for a startup
to build and launch a new hospital. The incumbents are quite powerful, although you could argue
that's true for a lot of other industries, but doctors, nurses, et cetera are powerful.
The economics are really funky. If Amazon or Netflix or you name your favorite disruptor comes
up with a better mouse trap, the relationship is largely between a customer and the vendor and the
customer says, this is better or cheaper or whatever. I'm going to buy it. In health care, you have
this assorted mishmash of insurance companies, businesses, government. And also, because health
care is so important and we have the capacity to kill people if we don't do it right, it is highly
regulated, which is yet another barrier for innovators to come in and disrupt us. We like technology,
but we like it in very, very specific ways. We have not embraced it as a mechanism to make care
better and safer and less expensive. You call your book a giant leap. I want to understand
this concept of the giant leap. My sense is you're arguing that health care has failed to take
advantage of technological progress to the degree that most industries have and that you're hoping
that AI in all its many forms will help us kind of leap over that sluggish period into a next
better phase. Is that about right? Yeah. The quote I like is Hemingway's quote from The
Sun Also Rises, now 100 years old. One of the characters goes bankrupt and another character says,
how does a man go bankrupt? And famously, he says, two ways: gradually, then suddenly. So that's us.
I mean, I think we have the gradually part down pat. We now have computers, which is great,
but we are the largest users of fax machines in the country. We finally ditched the pagers
after the drug dealers did. They were way ahead of us. So yeah, we are very sluggish in adopting new tools,
but we have gone digital. I wrote a book 10 years ago called The Digital Doctor, which was really
about our transition from paper to digital. That book is a very grumpy book. It's like how the hell
did we go from paper to digital and in some ways make things worse, in some ways make the lives of
both patients and doctors harder. Just digitizing the record helped in certain ways. It got rid of
doctors' handwriting, the kind of perennial joke. You know, when I do an electronic prescription,
it can land at Walgreens or CVS. That is massively better and safer. Two people can look at the
chart at the same time. There are lots of good things about it, but it was not enough to transform
medicine in some ways, as I said, it made it worse. The giant leap really is the combination
of the magic of the new AI, meeting a healthcare system that's in desperate need of change and
everybody knows it. We really are about to have our suddenly moment when healthcare is actually
transformed after tiptoeing our way toward this over the last 10 or 15 years to make it better and
safer, more accessible, more satisfying for everybody, both patients and clinicians. And I think
eventually less expensive, although that's a hard ask. My sense is that in writing this book,
you, a busy and accomplished person, decided to become even busier and accomplish something else.
And it seems as though you sort of got yourself a graduate degree in healthcare AI by speaking
with all these healthcare administrators, tech firms, investors, et cetera, et cetera, et cetera.
Can you just talk about what this journey slash process was like for you, why you decided to
undertake it and then who you actually did spend time speaking with? The things I was reading were
written by technologists, and I don't think they understood the big picture, the policy, the politics,
the economics. And so my wife, who's a journalist and writes for The New York Times, said the only
way you're going to get this right is to do it journalistically. And I said, what does that mean?
She said, you're going to go and talk to a lot of people. Who did I talk to? I tried to find
interesting companies and interesting people doing cool stuff. And when I spoke to them, I asked
them, who else should I speak to? And they told me about other interesting people. I know the world
of clinical medicine well. I know the world of academic medicine well and medical education.
I live in San Francisco, so I'm surrounded by technologists. I advise a bunch of tech companies.
So in each of those areas, I knew a fair amount to get started and knew some of the players,
but I had to go deeper. The first chapter in the book is called An Overnight Revolution,
50 Years in the Making. Can you just talk for a moment about what happened during those 50 years,
the successes, the failures, and why it's been such a slow boil? Yeah, I can do it quickly if we talk
successes. And it'll take longer to do failures. Slow boil is a couple of things. One is people
are treating AI in healthcare like it's new. It is not. In the 70s and 80s, AI became a thing.
And there was a lot of interest in medicine and artificial intelligence. If you think about it,
what does a doctor do? What did I spend eight or 10 years going to school and residency
and fellowship learning to do? It's to be intelligent: to take a whole body of information,
symptoms and lab tests and all that, match it against a body of information, the medical literature
and textbooks, and come up with a diagnosis and a treatment. So AI was very exciting, but the AI
of the day was not ready for prime time for a few reasons. First of all, it was the old if-then
AI: if a patient has a sore throat and swollen lymph nodes and a fever, they probably have
strep throat or mononucleosis. That works fine for very simple problems but falls apart very quickly
when faced with the complexity of real medicine. The second was all of our data was on paper.
Therefore, if you wanted to use these fancy new AI machines, you had to go to a separate computer
and type everything in. So both of those caused the field to flame out. And AI went away from medicine
for about 40 years. Was this imaging as well or no? It was early for imaging; imaging started
in the 80s and 90s. It was largely around the cognitive work of doctors. Part of the problem
was they started on the hardest problem and the hardest problem is diagnosis. I remember
speaking to one of the early leaders at the time, who was a professor at Stanford. These are not
dummies. These are MDs and PhDs in computer science. I asked, why did you focus on diagnosis as
the first thing to tackle? He said, we weren't naive about the complexity. It was just the most
interesting problem. You could understand that. These were innovators. They were at the cutting edge.
They really weren't thinking about practicality. That was an important lesson for today.
You don't start on the hardest problem, the one with the highest stakes, the one where,
if you get it wrong, you can kill somebody. You start on low-hanging fruit. You need to get
buy in and get trust from everybody. Patients and doctors and nurses. I think we're not making
that mistake this time. But that flamed out. Then IBM Watson beat the jeopardy champions in 2011.
You may remember that moment when Watson, a supercomputer trained to play jeopardy,
competed against a pair of human Jeopardy! champions, including Ken Jennings.
Kathleen Kenyon's excavation of this city mentioned in Joshua showed the walls had been repaired
17 times. Watson? What is Jericho? Correct. $400, same category. This mystery author and
her archaeologist hubby dug in hopes of finding the lost Syrian city of Urkesh. Watson?
who is Agatha Christie? Correct. Watson won $77,000 in that competition. That was a nice
payday. But of course, Watson caused billions to develop. And IBM had much higher ambitions for it
than winning a jeopardy. I remember watching that and thinking, well, we're all toast. And when
Watson then tried its hand at healthcare, which was the first industry that it tried to work on,
it completely flamed out. IBM did enter Watson into some high-end partnerships, with MD Anderson
Cancer Center, for instance. But Watson just didn't turn out to be very useful. Some of
its answers were obvious, others dubious. And it was very expensive. In the end, IBM dismantled
Watson, keeping some parts and selling off the rest. Again, not ready for prime time. And then
the sort of big deal in medicine was about 15 years ago, we all went from paper records to
these huge software systems called electronic health records. In 2008, fewer than one in 10
American hospitals or doctors offices had an electronic health record. By 2016, fewer than one
in 10 did not. In the space of a very short amount of time, we went from basically a paper-based
industry where the idea of using advanced data analytics and machine learning and all that
was impossible because all the data was on pieces of paper to an industry that had its information
in digital form. What was disappointing about that was many of us, including me, naively thought,
that's the ballgame. If we get our data in digital form, we'll be ready to innovate and do all
this stuff like Amazon and Netflix and Apple and medicine will be better and safer and cheaper.
It didn't work. The lesson that I took from that era was a term coined by Erik Brynjolfsson at
Stanford called the Productivity Paradox of IT. The idea that you take some fancy new information
technology, you bring it into an industry and snap your fingers and you will quickly transform
the industry to make it better and more productive. That almost never happens. Never happens.
Does not work and the paradox is it looks so good on the PowerPoint slides, the ads that were
used to sell it to us. It doesn't work and it doesn't work partly because the technology needs to
get better and all the iterative versions, 12.7 need to happen. But much more importantly,
the industry needs to transform the way it thinks about its work, organize itself, the culture,
the governance, and we didn't do that. In 2012, JAMA, the Journal of the American Medical Association,
published a crayon drawing, probably the first time they ever did that, from a seven-year-old girl
who went in to see her pediatrician, what it shows is the girl sitting on the exam table,
mom's next to her, sister's in the corner. And in the other corner of the room is the doctor with
his back to the patient, typing away. It's a beautiful drawing. There's one thing that the girl got
wrong, which is she portrayed the doctor as having a smile on his face. I can tell you that no
doctor was happy about being transformed into a data entry clerk. And patients noticed it. They
went to see their doctors and their doctor had the head down typing away. And why did that happen?
Because the computer became this enabler of all of these outside entities who used to have no
ability to influence what the doctor did, because I was scribbling on a piece of paper, now had a way
of making me check 12 boxes about, did I examine nine body parts? And did I ask, do you
wear seat belts? Do you exercise? And all that. All noble questions, but now there was a forcing
function that you could make the doctor record all this stuff. And so people did. And importantly,
when we send a bill off to the insurance company, the amount of money we get paid is partly
related to the nuances of how I record the note. Which creates some perverse incentives right there?
Totally, totally ridiculous incentives to say the right words in order to get the best bill.
And then a few years after that, federal legislation mandated that patients could not only see
their basic information and maybe their medications, but actually could read my note
and see their x-ray results and see their lab results. There was absolutely no information to help
the patient figure out what any of that meant or even to make an appointment. Other than to maybe
forward the results back to the doctor and say, I'd like an explanation please, which just sludges
up your inbox even more. The companies did what seemed logical. They put a little button at the
bottom of the screen that said, send a message to your doctor. Lo and behold, patients, being normal
human beings, clicked that button all the time. Electronic health records have led to a huge increase
in what is called pajama time for physicians. We talked about that in an episode called The Doctor
Won't See You Now, number 650. The American Medical Association in a recent survey found that roughly
20% of physicians spend eight or more hours a week outside the office wrestling with electronic
health records. But it seems that a new day may finally be dawning. The first really widespread use
of AI in healthcare now, and really the one that took over very quickly, in a year or two, is what's
called an AI scribe, or ambient intelligence. Every doctor at UCSF now has access to a tool where if
you're coming in to see me, I put my phone down on the desk, say, is it okay if I use this to create
my note, press a little button and it records our conversation. At the end of a conversation, I press a
button and there is your note. And this is not just a transcript. This is an assimilated transcript.
You might say, huh? A transcript would be worthless because you said, well, doctor, I'm having
chest pain and maybe 10 minutes later you would tell me you're having shortness of breath and your
right leg hurts. Those things go together in the note. They don't go 10 minutes apart in the note.
Maybe between them, you told me about your focaccia recipe or how much you love your grandchildren and
how Tommy's soccer game went last week. That generally does not go in the note. So the note has to weave
all of that together into a template that we are comfortable with. And these tools now do it
extraordinarily well. It saves me maybe a minute of time, not that much, but more importantly,
I'm no longer that doctor looking down at my keyboard during our time together. I'm looking at
you and really engaged in the conversation. This has really been the first AI tool that took medicine
by storm. And I think quite smartly on the part of the healthcare organizations, doctors and nurses,
but also the companies because it's an easy win. It's something that satisfies everybody. The risk
is relatively low. And for doctors, it's like, oh my goodness, I don't need to retire next year.
This time, in part because we've screwed up digital transformation in healthcare so many times,
I think everybody's coming in with their eyes open and a little bit more strategic. You don't threaten
the doctor saying we're going to take over your job. You are the doctor's friend. You're going to make
their lives easier and better. And you're not going to do anything that if stuff goes wrong,
you're going to kill somebody. That is a digital scribe. That is reviewing the chart for me. That is
helping me create my bill. That is helping a patient schedule an appointment. It's all that kind
of low hanging fruit. So does that mean that you, when you're seeing patients in the hospital now,
will take a chart and feed it through your favorite AI agent and ask for a summary and walk into
a room much more prepared? Yes, but not my favorite AI agent because one thing we can't do and
shouldn't do is take your medical record and stick it into a public version of ChatGPT. Fair enough.
So what version do you use? At UCSF, we have a partnership with ChatGPT,
so we have a version that's inside our firewall. It's actually within my electronic health record.
I take a look at your record and I see it's longer than I have time for and I click a little button
and it will summarize a 600-page document in 30 seconds, the way it would summarize any 600-page
book in 30 seconds. Give me an example of how that has worked out for you so far. It just makes my
life easier and if I'm seeing you and you have a past history of having had a blood clot 20 years
ago, and that's on page 397 of your 600-page record, and I miss that, I may not make the right
decision about whether you need a medicine to try to prevent a blood clot if you're going to be
in the hospital. One of the points I make over and over in the book, I use Biden's old line,
don't compare me to the Almighty, compare me to the alternative. So even if this chart summarization
is imperfect and the data says right now, it's very good but not 100% perfect. I recalled in the
book a patient I saw a long time ago. The patient had a history of a pulmonary embolism, a blood clot
which is really a bad thing to have had and probably means you're going to be on a blood
thinner which can be dangerous for the rest of your life. I happen to have a few minutes before
I saw the patient. I'm doing a little head scratching like, that's funny, the patient had a
history of a pulmonary embolism, the patient had no risk factors, no family history, that's kind of
unusual. And so I'm flipping through the chart and finally I found where this history of a pulmonary
embolism which we often shorten as PE came from. The other thing we shorten as PE sometimes is physical
exam. And the patient 20 years ago had a physical exam which the doctor labeled as PE and wrote the
patient's physical exam under that. The next doctor probably in a rush looked, saw the initials PE
and on the patient's problem list, now the patient had a pulmonary embolism. And that would have stuck to the
patient like gum on a shoe for the rest of their life if I hadn't caught it. So for all our concern
about hallucinating or bullshitting by AI, human intelligence is quite fallible, we should say.
Intelligence and time. This was not a matter of someone being not intelligent, it's just there's no
way to get the work done that needs to be done. Is it cutting down on pajama time? Absolutely,
absolutely. And in a way that is very meaningful for physicians to the point that it has led
them to be open to, okay that was great. What's the next thing?
Okay, so what is the next thing for AI in healthcare? We were just told in medical school
we can't detect these forms of cardiovascular disease using this test but we asked ourselves
could AI do exactly that? That's coming up after the break. I'm Stephen Dubner, this is the
Freakonomics Radio Guide to Getting Better. If you haven't heard the earlier episodes in our series,
they are sitting right behind this one in your podcast queue, and we will be right back.
Bob Wachter, who is chair of the Department of Medicine at the University of California,
San Francisco, has been telling us that AI has recently been proven super helpful to healthcare
providers by acting as a digital scribe and cutting down on other paperwork. So that's great,
but how about some more ambitious uses of AI in healthcare? For that we will go to Pierre Elias.
He did research with Wachter at UCSF, took leave from medical school to work for an AI
healthcare startup, then went back and got his medical degree in 2016. And what is Elias up to now?
I'm a cardiologist at Columbia University. I'm an assistant professor in biomedical
informatics and I'm also the medical director for artificial intelligence for New York Presbyterian
Hospital. Okay, that's the part I want to hear more about. What does that mean exactly to be
medical director for artificial intelligence at a big urban hospital chain like that?
So my center develops, validates and deploys AI technologies to help us find patients with diseases
so that we can take better care of them. We run the largest cardiovascular AI screening program in
the country. How big is it and is it just your organization and all its branches or does it go
beyond your organization? The majority of our work happens within our organization. This is
eight hospital centers, 180 clinics in the greater New York area, but we really do try to make a
lot of this work exist outside of those walls. I co-founded a consortium called Train Cardio,
the Task Force for Research Advancement in AI and Cardiology. This is 20 plus institutions
around the country where we regularly validate each other's work, collaborate on large projects,
and when possible, freely share that information or that data with the world so that other people
can build upon it. When you talk about building out this network of people like yourself,
give an example of a particular type of project. A number of years ago, I got a call in the middle of
the night from an outside hospital and they said, we have a patient that we think we need to send
you urgently. This was a gentleman who had shown up three months before in their emergency
department and he had shown up with some chest tightness and shortness of breath. They checked him out,
they ran a battery of tests and they said, listen, you're not having a heart attack. We hear a
murmur in your heart. You should go see a doctor about that, but you're fine. You should go home.
He feels better. He goes home. He's waiting to see his primary care doctor. Two months go by and
he has an episode of the same sensation, this chest tightness and shortness of breath and he goes
back to that same emergency department, but this time he's in respiratory distress. They end up
having to send him to their intensive care unit. They have to intubate him and put him on a ventilator
and at that point, they do an ultrasound of his heart and they see that he has severe
valvular heart disease. We have four valves in our heart, and like the plumbing at home,
it can get rusty or leaky and if it gets really severe, it can become life threatening.
Columbia is a world-leading center in valvular heart disease and so they were calling
me in the middle of the night saying, we really need to send this guy to you. I said, absolutely.
He came and I spent the rest of the night working on him, but unfortunately by the morning,
he was in multi-organ failure. He arrested and he passed away. This was a gentleman who
had no past medical history, was otherwise healthy, and I will never forget having to sit down with
his partner and say, I'm so sorry there's nothing I can do for him. The thing I became convinced of
was if we had just known about this disease, we would have been able to do something about it.
He could have gotten a same day outpatient procedure and I think he'd be alive today.
And it's hard to imagine that you couldn't have known about the disease when he had been
(a) in an emergency department at a different hospital and then (b) sent to you. So how rare or
difficult to detect is this disease? This is the fundamental challenge in all of medicine.
You can't treat the patient you don't know about. Oftentimes we're waiting until patients
develop symptoms, but for many diseases symptoms are a late-presenting sign. I became obsessed with
this question, which is we don't have a screening test for the most common cause of death in the
world, which is most forms of cardiovascular disease. And the reason for that is relatively
straightforward. The way that we diagnose most forms of cardiovascular disease today is either too invasive
or too expensive to do at a population level. Too invasive would be what? Cardiac catheterization
where we poke you in your arm or your groin and then we shoot some dye into the vessels of the
heart to take a look at them. How often is that performed on a healthy patient? Never. You would
never do a cardiac catheterization on a healthy patient. Patients would be presenting with symptoms
like chest pain or inability to exercise before you would consider doing a cardiac catheterization.
And the too-expensive option then would be what? It would be an echocardiogram, which is an ultrasound
of the heart. An echocardiogram costs a few thousand dollars. It's an hour-long procedure where they
look at the heart in a bunch of different angles. And so we end up in a situation where the way we
diagnose cardiovascular disease is either too expensive or too invasive to do at a population level.
So we wait until patients oftentimes present with symptoms which is late in their disease course
and patients have worse outcomes because of that. Okay, so all you need to do is what then?
All you need to do is magically find a way to create a cheap ubiquitous test that can screen for
the most common cause of death in the world. And I became obsessed with this and asked myself,
well, is there anything that we're doing today that could fill that role? And what I came up with was
the humble electrocardiogram. So if you've watched any medical drama and you see the squiggly lines
in the background that go beep beep beep that's an electrocardiogram where we measure the surface
electrical activity of the heart. If you have an apple watch you can do a one-lead electrocardiogram
right now. We are taught from medical school onwards that you cannot diagnose those diseases
with an electrocardiogram. It's simply taught as not possible, but we asked ourselves, could AI do
exactly that? We built one of our first AI models, and we were shocked to find that it worked
really well. We tested it on nearly 20,000 patients in a retrospective data set and we found
this AI model could outpredict me in trying to find valvular heart disease from the electrocardiogram
and then we asked ourselves, could we find all forms of structural heart disease from an electrocardiogram?
And that set us off on this journey of the last five years where we built this technology called
EchoNext, which looks at an electrocardiogram and tells us, does this patient have structural heart
disease or not. Did you identify that relationship or is it just that the machine learning helped you
find the cases in which one was related to the other? I didn't; only the AI did. And for a long time we
thought one of the holy grails would be: hey, the AI would see something, it would tell us
what it saw, we could teach it back to doctors, and then they would go and see that. But it doesn't
work that way. Modern AI techniques still don't do a very good job of explaining what it is that the
AI sees. And the AI does not think the way a doctor does; a doctor has a very specific way of
interpreting this sort of medical data, and we've shown in a series of studies and experiments the AI
doesn't care about that. The AI is not thinking the way we think. It's doing its own thing, and we can't
fully explain what it's doing. But what we do know is it can see things that we can't, and it can
accurately predict patients who have structural heart disease much better than I can. We took 13
cardiologists from around the country and we had them look at 3,000 electrocardiograms and then we
asked AI for these electrocardiograms does the patient have structural heart disease or not so we asked
a yes-no question random chance would be 50% the cardiologists were at 64% oh boy that's not very good
it's not very good but you know the cardiologist warned surprise we didn't think that the cardiologist
would do very good they didn't think they would do very good then we asked AI and it was 78%
the cardiologist wears far away from random chance as they were from the AI model and being able to
predict which patients had structural heart disease or not from their electrocardiogram did you then
use the two in tandem the AI and the humans we did and the cardiologist got a little better but
they weren't as good as AI alone the cardiologist with AI were 68% what's funny is myself and my
co-creator Tim Poruka we did the survey as well but we recused ourselves from the results and we
were no better than the other cardiologists even though we built the model wow the really shocking
finding though was half of the patients that the AI model thought were high risk for undiagnosed
structural heart disease don't go on to get an echocardiogram in the next year so what that
told us is there were smoking on evidence that there was a lot of people out there with undiagnosed
structural heart disease and this is like clinically significant disease this is the sort of stuff where
if I told any doctor I believe the patient had heart failure or severe regular heart disease we
would all agree that we need to clinically act upon this now and that led to us running something
called cactus this is the largest cardiovascular AI screening trial in the country it's happening in
eight emergency departments in the greater New York area any patient who shows up in any one of
these emergency departments if they get an electrocardiogram they're automatically being screened by the
AI model for undiagnosed structural heart disease one of the most striking was a young man who had been
exposed to smoke during the LA fires and then had recently moved to New York he had shown up with
some shortness of breath in the emergency department and they had presumed that this was asthma and bronchitis
and they had sent him home with an albederol inhaler saying follow up if you need to but nothing
else is necessary it turns out that equinex told us this patient was very high risk for undiagnosed
structural heart disease such a high risk that we actually got the procedure done urgently he had
severe heart failure and he was ultimately found to have a rare genetic mutation that put him at a
one in four chance of dying before it would have been diagnosed he ultimately underwent a heart
transplant and is home with his family today when you look at your colleagues who's embracing AI
or deep learning, as you sometimes call it? Who's ignoring it? Who's maybe actively opposing it, and so on?

Many people in healthcare, they don't have an AI reluctance; they have a technology reluctance. That is a learned response from previous data, meaning any time someone comes with some new technology, it often makes their life harder. This new technology is supposed to help us optimize billing, but it requires you to spend less time directly taking care of the patient. This is why I think it's really important for healthcare practitioners to be part of the process of creating this technology.

Do you feel like you're sort of at the front edge of a movement?

A thousand percent. No one was talking about this, you know, seven or eight years ago, when I started doing it. It's been 10 years of people saying, no, that's not possible, slamming the door in your face, and very much having to take a step out on the bridge while you're building it. The moments where I feel like I'm really doing science are when I genuinely do not know the answer to the question, but I know it's important to answer, and if it works, it would be fundamentally groundbreaking to the way we think about practicing medicine.
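The kind of model Elias describes, one that reads a raw electrocardiogram trace and outputs a probability of structural heart disease, can be sketched in miniature. The Python below is a hypothetical illustration only, not the actual EchoNext architecture: the kernel sizes, the random weights, and the synthetic "ECG" are all invented stand-ins, and a real model would learn its parameters from hundreds of thousands of labeled ECG-echocardiogram pairs.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation: slide the kernel along the trace."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_risk(ecg, kernels, weights, bias):
    """Map an ECG trace to a probability of structural heart disease.

    Each kernel responds to a different waveform motif; ReLU plus global
    average pooling summarizes how strongly each motif appears, and a
    logistic layer turns those features into one risk probability.
    """
    features = np.array([np.maximum(conv1d(ecg, k), 0.0).mean() for k in kernels])
    return sigmoid(features @ weights + bias)

# Invented stand-ins: a sine wave in place of a 10-second single-lead trace,
# and random (untrained) parameters where a real model would use learned ones.
rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0.0, 20.0 * np.pi, 5000))
kernels = [rng.standard_normal(40) * 0.1 for _ in range(8)]
weights, bias = rng.standard_normal(8), 0.0

risk = predict_risk(ecg, kernels, weights, bias)
print(f"predicted risk: {risk:.3f}")  # a probability strictly between 0 and 1
```

The 13-cardiologist comparison in the episode amounts to measuring such a model's yes-no accuracy against human readers on held-out ECGs; the point of the sketch is only the shape of the computation, waveform in, probability out.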
Coming up after the break: as any new technology spreads, there are the inevitable winners and losers.

The tech companies are playing this as, we have no interest in replacing the doctor, we really want to be a co-pilot, we want to be your wingman. But they obviously do want to replace the doctor.

I'm Stephen Dubner. This is the Freakonomics Radio guide to getting better. We will be right back.
Many companies are investing many billions of dollars to build out new AI infrastructure in healthcare, for all sorts of applications: clinical treatment and risk prediction, drug discovery, revenue and staffing operations, on and on. There's also one massive incumbent to consider: the electronic health record company Epic, which claims to maintain at least one record for 325 million people. Here again is Bob Wachter, chair of the Department of Medicine at UCSF and the author of A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future.

Epic won this market. This was a little tiny company started by Judy Faulkner in the basement of an apartment off the University of Wisconsin. She became the most successful female entrepreneur probably in American history. It's a really remarkable story. I think they won because they were the best, and the best was partly integration. Judy's theory of the case was: we're not going to bolt on 37 different tools by a bunch of different companies; we're going to own the entire thing, and that is going to allow us to provide an integrated solution. And medicine is so complex, and there are so many moving parts, that if you don't have an integrated solution, the thing's not going to work very well.

I do wonder how much of this kind of sclerotic nature of the electronic health record market, and the attendant difficulty in using those data to move forward, as people have been trying to do in the past but not succeeding very much, how much of that is due to the fact that Epic already has a lot of success by playing the status quo, and that they don't have much incentive, maybe, to innovate or to let others play with their data in a productive way?

I think all of that is true, and generally
monopolies are bad. I think it's going to become more true in the coming years, because of AI, than it has been true over the last 10 years; I don't think Epic is the main part of the problem up until now. And the reason I distinguish the past and now is that once AI became a thing: so much of our data is in the form of narratives in a medical record, which, until generative AI, you know, was really unstructured data that was not useful. You could analyze your hemoglobin and your creatinine and an EKG finding, but not my note, which might be a page long of narrative. Try to sic the old kind of AI on "this is a 62-year-old man with a history of congestive heart failure who comes in with shortness of breath and chest pain"? Impossible. The AI couldn't deal with that, so it was really not computable until recently. Now that it is, and now that generative AI creates the capacity for all sorts of magic, the idea that Epic is going to own the entire enterprise, and you're going to need to use Epic-built tools for all of the different use cases of AI, and there are going to be hundreds, I think that's going to really slow things down unless the government forces Epic to become more open and more amenable to bolting on third-party tools.

So who do you
think will ultimately win the AI healthcare platform wars? Do you think it'll be the big incumbents, Google and Microsoft? Do you think it'll be the well-funded AI startups? Or maybe someone or something else?

I was on Google's healthcare advisory board when Eric Schmidt came in and said, this is one of the biggest things that people search on, we're going to figure this out, and they started building a version of their own electronic health record. About a year and a half later, Eric came in and disbanded us. He said, this is too hard for us. And I said, wow, if it's too hard for Google, that must be pretty hard. Google, Amazon, Apple, Nvidia, Facebook, Microsoft all have designs on healthcare. They know how important it is, they know how big an industry it is, and they all seem to screw it up time after time. It's not that they don't have enough smart people or enough resources. It is that so much of healthcare is local: how payment works, how doctors think, how things are organized. They're just so far away from the day-to-day workflow. The real winner here probably is Epic, meaning that they have this advantage of encompassing, of having all the data in it already. And so far, they are building AI tools for basically everything. They're probably not as good as a tool built by a third-party company that's focusing on that one use case, but there's the advantage of having an integrated tool, and of building tools that are good enough. If I'm a hospital, I'm trying to decide: do I wait for Epic's tool, where all I have to do is turn a switch and it's on, and I'm sure Epic's going to be in business in five years? Or do I buy a tool from this really cool new startup down in South of Market in San Francisco, when I'm not sure they'll be in business? For a healthcare organization like mine, the default setting is to buy it from Epic rather than to buy it from the third party.

I assume that everyone in the world has tried, or is trying, to buy Epic.

Everyone in the world has tried to buy Epic, and the answer from Judy Faulkner, who is now 82, is that it is not for sale. She does not accept any investment.

Is there a next generation of Faulkner leadership?

It's probably an internal person, because the culture of the company is pretty insular. But they seem like they want to remain private for the foreseeable future. And not just seem: in her will, it says that it must remain private, that it goes to a consortium of current employees and her family and cannot be sold.

And how do you feel about that?

Judy had a partner in the early days, and the argument they had was over this. He said, we need to accept VC funding, we're not going to grow fast enough. And she argued, we need to own the entire process here if we're going to create this integrated system where all the pieces fit together. When he was interviewed a couple of years ago, he said, obviously she was right and I was wrong. They have been a massively successful company, and the product they produce is quite good. We use it, and I'm reasonably satisfied with it. I think, over time, there's no way that a single company can produce the best AI tools for use cases that span the range from an AI scribe to an AI diagnostic support system to an AI tool that deals with the insurance company to an AI tool that facilitates clinical research. There's just no way that one company sitting on farmland 10 miles from Madison can possibly do that. Their ambition is to do that. I think the world is a better place if this is more open to third-party innovators bolting in, and that's going to require more federal push, because that is not in the company's DNA.

So what do you see as the proper role of government
in regulating AI and healthcare, generally?

Well, one is to create a level playing field, and to be sure that innovators have a fair chance to succeed in an environment where there is a tendency toward monopoly, in part because of this idea that one company owns all your data. On the issue of regulating AI tools, I have a chapter on it, and I came up with the very unsatisfying answer of: this is really hard, and our existing structures, meaning the FDA, or the Joint Commission, which currently accredits American hospitals, are not fit for purpose for this tool. The FDA can regulate a new radiology tool because they've regulated devices forever. They've regulated pacemakers and defibrillators, and they've regulated drugs. They can regulate a tool that gives a static answer, the same answer three years from now that it gives today, and whose implementation, the way we put it in at UCSF versus the way it's put in at a rural hospital 100 miles from here, probably isn't going to change all that much about the way atorvastatin works on your blood pressure or a CT scan reader works in giving you an accurate reading. But most of AI is not like that. Most of AI can shape-shift tomorrow compared to what it did today. Most of what it's doing is giving answers or insights based on the literature; how is that different than regulating a textbook or a medical journal? So I don't think we have even begun to think about what the regulatory infrastructure looks like to get this right. I think the Trump administration is going too far in an anti-regulatory way, but I do think the risk of over-regulating this in the short term is higher than the risk of under-regulating it. There are sufficient guardrails in the healthcare system, maybe with the exception of direct-to-consumer use of these tools. In terms of my institution: a place like UCSF is a $15 billion business that has a terrific reputation, and it's surrounded by a whole lot of malpractice attorneys. We're going to be really careful about bringing an AI into our system until we're pretty darn sure that it's going to work and be effective and not hurt anybody. There are a lot of existing guardrails, along with just professional conservatism. And I think if you try to have government regulate every single decision support tool, that's impossible; there's no way they can keep up with the speed of innovation here. So I think we've got a lot of work to do to figure out the regulatory environment, but in the short term, I think a relatively light regulatory touch is the right way to go.
So that's what Bob Wachter has been thinking about: how to lay out a plan for a full-on merger between AI and healthcare. And let's say that merger happens. What should we expect? What are some of the most wonderful benefits to us, the users of the healthcare system? I went back to Pierre Elias to ask about that.

I see the ability for every patient to receive exceptional quality of care, in a cost-effective and affordable way. We make sure that the right steps are being taken for each patient, so that they're being effectively triaged, being screened for diseases the way they need to be, and wherever there is uncertainty about what should happen next, we actually are able to quantify what that degree of uncertainty is and help patients navigate through that care journey without second-guessing what is actually going on.

How do you see the role of the physician changing? And I would assume this means that medical education should be changing a bit, too.

Yes. I think the role of the physician continues to be that of the person on the journey with you. I think about all of the scariest and most important things I've done in my life, and I think about the people who were there to guide me through it. Those people leave indelible marks on you and help guide you through this challenging process. It may not be that mechanically they're having to do all of the work. It may be that the important thing is helping you reflect on what this journey means, helping find the places where there are lingering questions and uncertainty, and helping prepare you for what the next steps are going to be. I practice cardiology in a different way than I was trained to, because I believe we have to think about the way we practice medicine in new and novel ways, and AI is going to allow us to do that.

I think for now it would be a mistake to take too much of clinical reasoning and the facts of medicine off the plates of trainees.
That's Bob Wachter again.

Because I think you could easily enter a death spiral where the AI is better than the doctor because the doctors are getting worse.

You're talking about de-skilling now?

Yes, I'm talking about de-skilling.

This is the de-skilling argument, which is that, you know, if physicians continue to rely on technologies like AI, they will lose the ability, or the skill, to actually do what they used to do. Is that an argument being made by the physician incumbents to scare off AI?

Partly, but not completely. You know, there's good de-skilling and bad de-skilling. I will admit to you, I have de-skilled on map reading; I can no longer read them. The question about de-skilling in medicine is complicated. There are parts of de-skilling, for example the physical exam, where we're definitely not as good at it as we used to be. There are elders who lament that, partly because the physical exam was about its clinical value, and partly about the laying on of hands and sort of the connection between the doctor and the patient. But I think it gets romanticized. I would rather have a CAT scan than my lung exam to try to figure out what's going on in your lungs or your abdomen. So there are certain parts of de-skilling that just happen because the new technology is better than what we used to do, and you no longer need the old skill. Last year, a study published in The Lancet Gastroenterology & Hepatology, one of my favorite journals for light reading, looked at this question of potential de-skilling because of AI. Very experienced gastroenterologists who did this procedure called colonoscopy, looking up into people's colons, were given an AI colonoscopy tool, which puts a little box around lesions inside the colon that it deems suspicious and finds some things that the doctors would miss. They had access to the tool for three months. The doctors liked it, and they benefited from it. Then the tool was turned off, and their performance on doing their colonoscopies fell significantly. These were doctors who had an average of 10 years of experience doing this procedure.

So in just three months of exposure to this AI crutch, they got less good at this thing that they had been doing for 10 years.

These questions of how the AI interacts with human actors in really complex systems, where the stakes are high and the AI is getting better every minute and the humans are not, I think are really fascinating.

Let's say that
AI in healthcare delivery continues to improve and augment the lives of patients and physicians the way that you're describing in this book, the way that you hope it does. I'm curious what you see that looking like, and especially how the role of the physician changes. We did an episode a few years ago about what are called co-bots, robots that are collaborative. This was in nursing homes in Japan. They were these big physical robots that could help lift a patient, clean, and so on, and the finding from research around them was that the healthcare workers (a) actually loved it, and (b) were able to lean into what they as humans are good at, which is dealing with patients on a human level, rather than just moving them around and getting them to the bathroom and stuff like that. So if we use that as the sort of model here, let's say for a physician like yourself, or maybe someone a generation or two younger: if AI unfolds the way that you're hoping it does, in 10 years, let's say, what does the physician get to do that maybe they're really great at, now that some of the burden has been lifted by AI?

I think the fundamental question of AI and healthcare is not creating my note or reviewing my chart. It is computerized decision support. It is the AI helping me make the best decision for you as a patient, based on things about you, but also about the medical literature, which evolves and changes very, very quickly. And the best decision is not only the one that provides the best outcome, but the one that's the most cost-effective. It's been said that the most expensive piece of technology in a healthcare system is the doctor's pen, which, of course, is no longer the pen; it's the keyboard. Where this really will have an impact is: I'm seeing you in my office or in the hospital, and it's not like I now pull out my phone and say, this is a 52-year-old man who comes in with this, this, and this. The AI is already reading your chart. It already knows all those things about you, and it's suggesting in real time: suggesting diagnoses, suggesting what the right test would be and what the right treatments would be. Now, do you need me in that setting? I think so, but we'll have to see. I think you need me to interpret all of this, to be a tiebreaker when there's a tough call, to deal with some complex, sometimes ethical, issues, to weigh your own preferences as a patient or family. There's a lot of complexity in this that I think goes beyond the kind of decision-making support that Waze gave me this morning when I drove into the studio.

The tech companies are playing
this as, we have no interest in replacing the doctor, we really want to be a co-pilot, we want to be your wingman. But they obviously do want to replace the doctor?

I think, for the foreseeable future: the complexity of medicine, the stakes of medicine, the regulatory environment. And periodically I do have to say to a patient, you have Alzheimer's disease, or, you have cancer. I don't think patients are going to accept the idea that a bot's going to tell them that.

Although you do write about the fact that AIs can do better in the empathy realm than humans.

That was one of the shockers. In those early years, which is really only two years ago, when we saw AI passing the medical licensing boards or doing well on really tough clinical cases, it was like, okay, it's pretty smart. And then studies began to come out saying that if you did a blinded trial of a patient actor being given answers either by doctors or by AI, they often preferred the answers from AI, and the AI appeared to be more empathic. Of course, the AI has no empathy, but it can fake it really, really well.

Some of my doctor friends have been telling me that their patients will come to them having used ChatGPT or some other AI bot to go through some scenarios or symptoms or whatever, and the docs seem generally pretty happy about it. This is, to me, in stark contrast to a trend from 10 or 20 years ago, when direct advertisement for pharmaceuticals began on television, and patients would come in and say, okay, I just heard about this thing, I just need you to sign me up and give it to me. They feel now, at least this is for my friends, that the AIs are becoming a pretty decent research tool for the patient to then bring to them, the expert. I'm curious if you're seeing that, and how you feel about that trend generally.

I'm seeing a ton of it. I mean, they were doing a version of this with Google, but these tools are better, and so the answers they're getting are better. I think it's net positive. I think anything that democratizes healthcare is going to be good, assuming the answers are reasonable and correct, and assuming that patients, when they need to see a doctor, still see a doctor. And I think that's the open question here.

Okay, Bob, here's what you've said that you hope AI can do: produce better outcomes
for patients, lower costs, and add some relief for beleaguered doctors and nurses. Then you say, however, that the success of all this will depend on history, politics, economics, pride, regulations, leadership, lawsuits, guilds, culture, workflows, inertia, greed, hubris, vibes, and zeitgeist as much as graphics processing units, diffusion models, and neural networks. In other words, the tech can work, but then we get those layers of people who may feel that their realms are being infringed. So the way you describe it there, it sounds like what some people like to call a wicked problem, which is basically unsolvable because there are so many constituencies, and so many of those constituencies have incentives that are at cross purposes with the other constituencies. So when you take a look at the big picture, how much optimism do you have? Do you think that the upsides of this technology will be able to be successfully integrated into healthcare delivery itself, or do you think that AI becomes yet another piece of the mess that is the U.S. healthcare system?

I would interpret that very, very, very long sentence as saying it's not just about the power of the incumbents, although that's a very real part of it. It's about the complexity of medicine. It's about the regulatory environment, which, for important reasons, says there are certain things where we're going to restrict people's ability to do them, or require that a human does it rather than a bot. It just says that this technology can be really, really spiffy and still not deliver, because so much of it depends on humans and their systems and their governance and their culture and their own self-interest. I harken back to the old Yogi Berra-ism: in theory, there's no difference between theory and practice; in practice, there is. I think that's what we're going to see; in practice, that's where the rubber is going to hit the road here. I think it's going to be net very positive. Patients think that too. There was a Gallup survey last year where people were asked their attitudes about AI. They were really negative about its impact on jobs and on the political system, and I sort of feel the same way. The one area they felt positively about was medicine. I think that's partly because the AI is really good, and partly because the system is so screwed up. Everybody recognizes that we are in desperate need of reform in healthcare, and our typical go-to response in medicine when we can't do what we need to do is, we just hire more humans. (a) We can't afford it; we're already 20 percent of the GDP, and bankrupting businesses and people and governments. But (b) we can't even find the humans, even if we could afford them. At least for the foreseeable future, this is going to help me do my job, help me be more of the doctor or the nurse I want to be, help me focus on the patient. So that leaves me optimistic over the next 10 years.

Will there be jobs for doctors 20, 30 years from now?

Well, unless I live to 120, that's of no relevance
to me. But you know, I think there will be.

That, again, was Robert Wachter, whose new book is called A Giant Leap. We also heard from Wachter's one-time mentee Pierre Elias, at Columbia University. My thanks to both of them. I learned many things in this episode; I hope you did too. I especially liked learning a little bit about Judy Faulkner and Epic. Now I'm hoping we can bring her on the show sometime. This is the final episode in our Guide to Getting Better series. Let us know what you thought: our email is radio@freakonomics.com. Or you can leave a review on your podcast app. Also, if you want to keep up with everything we do around here, you can sign up for our newsletter at Freakonomics.com or at stevendubner.substack.com. Coming up next time on the show, for the Super Bowl, we will tell you why NFL running backs don't get paid the way they used to. And then, in a new two-parter, we will look at what it really means to cheat.

People like to call it cheating. You can call it that. I'm not sure he was cheated, but that's just what it was.

If you won the Tour de France while doping, but everybody else was also doping, were you the cheater? And what would happen if there were no rules against doping?

My goal is to bring about the tenth age of mankind, the enhanced age, where everyone has the opportunity to become enhanced.

That's coming up soon. Until then, take care of yourself, and if you can, someone else too. Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive on any podcast app, and also at Freakonomics.com, where we publish transcripts and show notes. This episode was produced by Dalvin Aboagye and edited by Ellen Frankman. It was mixed by Jasmin Klinger, with help from Jeremy Johnston. Special thanks to Rochelle Wolensky for background research help. The Freakonomics Radio Network staff also includes Augusta Chapman, Eleanor Osborne, Elsa Hernandez, Gabriel Roth, Elarium Antenna Court, Teo Jacobs, and Zack Lapinsky. Our theme song is "Mr. Fortune," by the Hitchhikers, and our composer is Luis Guerra.

What kind of cheese do you have on a bagel, though?

Muenster.

Oh, no kidding. I'm a big Muenster fan too. It's the best.

The Freakonomics Radio Network: the hidden side of everything.
Stitcher
Freakonomics Radio



