
I'm Marianne Kolbasuk McGee, Executive Editor at Information Security Media Group.
Today, I'm speaking with Dave Bailey, vice president of consulting services at Clearwater, a healthcare privacy and security consultancy.
We're going to be discussing issues involving the use of AI in healthcare.
So, Dave, as we know, healthcare organizations are adopting AI tools for clinical, administrative, and other uses.
But many are also eyeing AI to supplement or enhance their cybersecurity efforts.
What are you seeing on that front?
Marianne, thanks for the question.
What we're seeing as a major necessity inside the healthcare sector is the need to look closely at the tools that provide the safeguards protecting your organization from today's cyber threats.
Those tools will only be effective if they are on an AI journey themselves, and many of the primary systems providing endpoint protection and other security protections are embedding artificial intelligence into their ecosystems so they can be faster, more efficient and keep up with today's threats.
So, when you start digging into that, what specifically are they doing with AI when it comes to those security functions or controls?
AI can be very powerful when you define a use case and then develop and apply artificial intelligence to that use case.
What most folks can understand is that a human is going to be a lot slower than a computer at looking at volumes of data, digesting it and looking for detections and indicators of compromise.
And once you have those indicators of compromise, if you are under attack, everything today is about how quickly you can respond.
Artificial intelligence is enabling much faster, more effective capabilities for the technologies to respond, to act as a first responder and to put those defenses up more quickly than a human could.
So if you look at how artificial intelligence can enhance the effectiveness of security controls, you're certainly seeing it in the ability to go through volumes of information and detect faster, and then, once you have that detection, in how quickly you can respond. Artificial intelligence can really help in the response effort.
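As a minimal sketch of the kind of volume-scale detection Dave describes, a team might flag log sources whose event rates jump far outside their historical baseline. The field names and threshold here are illustrative assumptions, not any specific product's method.

from statistics import mean, stdev

def flag_anomalous_sources(history, current, threshold=3.0):
    """history: {source: [hourly event counts]}; current: {source: latest count}."""
    alerts = []
    for source, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(counts), stdev(counts)
        observed = current.get(source, 0)
        # How far the latest volume sits from this source's baseline.
        score = (observed - mu) / sigma if sigma else 0.0
        if score >= threshold:
            alerts.append((source, observed, round(score, 1)))
    return sorted(alerts, key=lambda a: -a[2])

# Example: a workstation suddenly emitting far more events than usual.
history = {"ws-042": [12, 9, 15, 11, 10], "ws-107": [8, 7, 9, 8, 10]}
print(flag_anomalous_sources(history, {"ws-042": 140, "ws-107": 9}))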
And what about the use of AI as a component of vulnerability management? Are you seeing much of that? And what are the potential risks if strong governance is not in place?
Well, it's extremely important for any organization to know what its vulnerabilities are. If there is anything I would say about the health sector, it's that we don't do a very good job, not necessarily at vulnerability identification, but at patching vulnerabilities.
So it's extremely important that we understand what vulnerabilities an organization has, and whether you are able to put a mitigating control in place or ultimately patch that system before a threat actor can take advantage of it.
What we're finding is that threat actors are also using artificial intelligence to make their attacks more efficient, faster and more capable, so we have to do everything in our power to keep up with that.
And you mentioned governance; it's extremely important. Everything about artificial intelligence use comes down to being able to trust it.
Not only do the security teams, business leaders, clinicians and doctors have to trust the information, but an important part of it is: Will the patients trust it? Will the users trust it?
Everything about artificial intelligence is having good governance in place and being able to demonstrate that level of trustworthiness. It is extremely important.
So, Dave, you mentioned earlier the use of AI for threat intelligence. What about the use of AI to improve insights about potential threats and to help block them?
How might that be negatively impacted if healthcare entities are not properly governing this use of AI?
From an overall governance perspective, understanding those risks is extremely important.
Every organization should manage to those risks, and good AI governance ensures that you are assessing the risk of the AI system you're about to embark on, knowing what those risks are and being able to mitigate them.
From an overall threat intelligence perspective, the governance organization also needs to understand what the adversary is doing with artificial intelligence.
If you understand how threat actors could potentially use artificial intelligence to disrupt, cause havoc or harm your systems, that certainly helps you evaluate the risk.
And part of good governance is that you should understand, assume and accept the risk of the particular system you're about to use that leverages artificial intelligence.
And Dave, what about the emerging risks associated with embedding AI into clinical and patient-facing environments? What are you most concerned about in that area?
Well, any organization that implements artificial intelligence in its clinical operations should have a really good set of guiding principles.
The first is simply understanding what the AI is doing: Are you using it for clinical decision-making? What processes did you follow to validate that the data is sound and accurate?
And from that guiding-principles standpoint, recognize that any autonomous use of artificial intelligence in clinical decision-making can be very dangerous.
Organizations implementing artificial intelligence need to have the processes in place and the understanding that they've tested the data and they trust it.
What do you do in the event that, say, a nurse finds an error in an AI-generated clinical summary inside the EMR? How do you respond? It's extremely important for organizations to understand how they're going to use it.
And then plan around those adverse events: when they ultimately don't trust the data, or when there is a problem or a decision was made and the artificial intelligence either gave or supported that decision.
Once again, I go back to trust, and to ensuring that an organization can trust that the artificial intelligence is providing value and meaning, and that clinicians can use it as part of their decision support system.
So, Dave, we've talked a little bit about governance, and you also brought up the important issue of trust.
But what are some of the critical technical considerations in the use of AI in healthcare for non-security purposes that, if overlooked, could contribute to making healthcare entities vulnerable to security incidents? And how should we address these issues?
This is actually, I think, a very challenging question as we look to the future and as more AI adoption comes across the sector.
I made a statement not too long ago, and maybe I should trademark the T-shirt, that says, "My artificial intelligence is better than your artificial intelligence."
Once an artificial intelligence system is in use by an organization, some of the risks and threats posed by that system may be difficult for a human to determine and understand.
I think it's going to take new, emerging technologies to monitor those models and their outcomes, and to give organizations indicators when the model drifts, hallucinates or moves outside its baseline in ways that a human process might not catch.
So as we look at how to successfully monitor for AI risks, it's going to require technology.
I went to RSA last year, and artificial intelligence dominated the security industry. There were so many companies developing systems not only to identify and monitor artificial intelligence, but also to help an organization keep control and put measures in place against what are, right now, some of the biggest fears: What happens when you drop data into the AI and lose control of it? How can you give your organization the tools it needs to use artificial intelligence successfully without introducing harm to the organization or to the patients?
Ultimately, it's going to come down to leveraging technology that is either emerging today or certainly newer to some folks as they look at large-scale adoption of artificial intelligence.
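For illustration, one simple drift signal this kind of monitoring often leans on is the population stability index, which compares a recent window of model outputs against a validated baseline. This is a minimal sketch under that assumption; the scores, bin count and 0.25 threshold are conventional placeholders, not any vendor's implementation.

import math

def population_stability_index(baseline, recent, bins=10):
    """Compare a recent window of model scores against a validated baseline."""
    lo, hi = min(baseline), max(baseline)
    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1
        return [(c + 1e-6) / len(values) for c in counts]  # smooth empty bins
    b, r = fractions(baseline), fractions(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

# Example: scores from the validation baseline vs. a recent week of outputs.
baseline = [0.82, 0.79, 0.85, 0.81, 0.80, 0.84, 0.78, 0.83]
recent = [0.61, 0.58, 0.66, 0.63, 0.60, 0.59, 0.65, 0.62]
psi = population_stability_index(baseline, recent)
print(f"PSI={psi:.2f}", "- review the model" if psi > 0.25 else "- stable")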
And Dave, in terms of those emerging technologies, is there anything you see as potentially promising, interesting to watch or even being piloted, for that matter?
There are a lot of really exciting capabilities for organizations to, in essence, monitor their EMR instance and determine whether there is any type of model drift, and a lot of great companies are developing those capabilities.
From more of a security standpoint, what can help make the CISO and CIO a little more comfortable is being able to monitor employees' use of artificial intelligence, to ensure that data isn't being dropped in and that people are following whatever acceptable use policies the organization has implemented.
So you go beyond just the administrative control and implement the technical control that can help prevent a user from making a mistake and doing things inside artificial intelligence that you don't want to happen.
There are great monitoring technologies out there that organizations are starting to utilize, and I can see more and more adoption as we move into 2026.
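As a rough sketch of the kind of technical control Dave describes, an organization might screen outbound prompts for obvious PHI patterns before they reach an external AI service. The patterns and the block action here are assumptions for illustration, not a specific tool's behavior.

import re

# Illustrative PHI patterns; a real policy engine would be far broader.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def screen_prompt(prompt: str):
    """Return the names of any PHI patterns found, so policy can block or redact."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the visit for MRN: 00482913, DOB 04/12/1961."
hits = screen_prompt(prompt)
print("Blocked by acceptable-use policy:" if hits else "Prompt allowed", ", ".join(hits))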
And finally, Dave, anything else that you're keeping an eye on in healthcare AI use this year that we haven't touched upon, and why?
What I'm keeping an eye on: I think organizations have started to move beyond what I like to call general knowledge, beyond just understanding what artificial intelligence is in a general sense.
They're starting to implement the right practices and the right rigor and to wrap their arms around governance, and we are seeing some organizations make great strides in that area.
What I think is important, and what will be challenging, is that there are key stakeholders in the implementation of artificial intelligence that organizations may not have, and, once an artificial intelligence system is in use, the ability to continually validate and monitor it.
I think that is going to take some new training and some new thinking about what that role is inside an organization.
That's where organizations need to continue to make big leaps and bounds. It's one thing to put the governance around getting the system in.
What do you do after it's in? How do you manage its lifecycle? There are still some challenges out there around having the right, trained resources, and hopefully we can help organizations through that as we move into the year.
Well, thank you so much, Dave. I've been speaking today with Dave Bailey. I'm Marianne Kolbasuk McGee of Information Security Media Group. Thanks for joining us.
