Loading...
Loading...

Welcome to Heavy Networking, the podcast where the ideas are jumbo
frames and the conversational MPU is set to 9,000 bytes.
I'm Drew Connery Murray here with Ethan Banks.
Today's show is a sponsored episode with Palo Alto Networks.
Palo Alto Networks has released a slew of product news at the 2026 RSA
conference around AI security, sassy, secure browsing and certificate
lifecycle management.
We're going to dig into some of these announcements to get details and
talk about the risks both of AI and the enterprise and how AI can be
used to support and improve your own security operations.
Our guests are Ian Swanson, VP, AI security and Rich Campagna, senior
vice president of product management, both from Palo Alto Networks.
Rich and Ian, welcome to the podcast.
Every conversation that I had at RFA included AI and on the one hand,
it's kind of like, okay, there's just so much hype here.
It's a lot on the other hand, clearly it's not a fad.
There's real value and real risk around enterprise AI.
So from your perspective, what should enterprises understand about AI
risks at this moment?
Yeah, so I'll take this first.
Thanks for having me.
First off, you're right.
AI is definitely the buzz word that we're hearing at RSA, you know, this
week, but also, you know, over the last couple of years.
Yeah.
And the reason why is AI could be transformative for many companies.
They can use AI to reduce operational costs, you know, improve customer
experiences within their products, but the risks are real.
And so one of the things, you know, I hope you'll talk about today is what
are some of those novel risks.
And the important statement is really no enterprise should deploy AI in any
production without securing AI.
And that is something that we hear, you know, from CSOs all the time, you know,
every conversation at RSA is, how do we discover our AI?
How do we assess the risk and how do you reprotect it at runtime?
And PAL also networks is the partner to help our customers do that in order to
deploy AI safe, securely, and also make sure that it's trusted.
So just a question sort of off the cuff, is AI actually all that different
because we've been through different technological development, like cloud
and so on, I was like, okay, yeah, cloud, it's a new thing, but we still need
identity and access management.
We're still going to put firewalls there and so on.
So it's, you know, sort of the same problems just in a different location.
Is AI presenting new or unique challenges that might require a different
approach or a different mindset?
Yeah, the short answer is absolutely, yes, it is different.
And so to call out a couple of those things that are different is number one,
the supply chain is different than what we've seen in your typical software.
Meaning it includes things like models, agents, skills, models can be
serialized and as they're serialized, they can hide malicious code that other
solutions just would not capture.
And then from a runtime perspective, it's looking at things that are non
deterministic and trying to understand is it safe and is it secure versus
structured, you know, data, structured responses, you know,
and a way to be deterministic in nature.
So the bottom line is from the supply chain to how AI behaves is different
than your typical software.
All right.
So let's dive into some announcements.
And one of the things the Pellow Health and Networks announced at RSA 2026 was
next gen trust security or NGTS, it's combining certificate life cycle
management and a PKI to help organizations manage digital search.
So rich, what problem is this meant to solve and why did Pellow Health and
Networks decide to roll this out?
Well, we decided to roll it out because AI is not the only challenge facing
the, facing the enterprises.
I'd like to channel a little bit of my inner Justin Timberlake and say just
like he, he brought sexy back, we're bringing crypto back.
I mean, there's a, there's a lot of cool stuff happening in cryptography right
now. In fact, if memory serves, I was on this podcast not too long ago,
and we talked about quantum security.
So, you know, here with, with certificates, there's a big change happening.
And I honestly, I don't think that it's because maybe because AI is
drowning everything else out, I don't think the big challenge is, is, is
recognized nearly well enough.
And that is that certificate lifetimes are, are shrinking.
And there's this organization, this consortium called the CA browser forum.
And for years, we've had long lifetimes for digital certificates,
you know, 400 days or, or so.
And over the next couple of years, they're taking these steps to shrink the
lifecycle of those certificates down to 47 days.
And in fact, just in, in the middle of March, 2026,
you know, we had the first of these, you know, these, these changes where
the any new certificate issued from this point forward shrinks from 400 days
down to 200 days, steps down again next year and by 2029 gets to, gets to 47 days.
As a best practice that the certificate should be no browser orderly a planet.
In 2029, we'll accept a certificate with a maximum validity period that exceeds 47 days
each. And this is not a best practice.
This is a, this will be in, and if you have tried to use a browser,
then that would be the case.
Well, that's it.
I mean, that dovetails in with some of the stuff we've heard from Let's Encrypt as
well, where they're giving you options to have very short certificate lifetimes
indeed, like down in the single digit days or even hours, I think, for some of
the other announcements that came through from them.
Yeah, I mean, listen, it's, it's, you know, the, the, the days of a, a certificate,
you know, lasting for years and you have this, you know, these, the revocation
processes, which may work or may not, you have the, the potential for a
certificate to, you know, somehow be, you know, be, be stolen.
I mean, it's, it's just a big security gap and risk.
And the reason that certificates have lasted for, for so long is that it's a
pain to, to keep them updated.
I mean, I, I don't, yeah, I can't think of an organization on the planet with a
web presence that hasn't had some sort of snafu at some point in time where
somebody missed the Google calendar reminder, forgot to, forgot to update
this certificate.
No, they had an application outage, you know, perhaps a critical one, even Google.
If you guys remember, you know, you know, actually that was a domain
experience.
I apologize.
Google did have many years back a time when the Google.com domain expert in a
forum where it's nothing, your point stands well.
I do, I have some modeling software I use that and one of the things it tracks
on various sites that are important to, to, to me, is certificate lifetime.
How, how, hey, this thing's going to expire pretty soon.
I'm like, wow, you guys are a big organization.
You're not on that.
And, but yeah, it happens to everybody.
Yeah.
And, you know, there's other, other changes happening as well.
I mean, with the world moving towards quantum readiness over the next couple of
years, you want to enforce those, you know, those, those quantum, quantum
save cryptographic algorithms and your, and your certificates as well.
But, but yeah, Ethan, that, you know, certificates are shrinking.
There is still a lot of manual process.
A lot of certificates in a typical organization that aren't managed at all, you know, by anybody
short of, you know, some single individual user and some, some application
development team or some business student.
So, you know, it's, the process has started, as I said, as a march of, of 2026.
And organizations that don't move from manual certificate related processes to automated
processes are, unfortunately, will probably suffer some of those service-related
outages in this window shrink.
So what, what did you guys announce?
So you're talking about next-gen trust security, NGTS.
Is this a, is this a product?
Is this like a certificate authority I deploy internally?
This is a product that works in conjunction with certificate authorities.
It's not a certificate authority on its, on its own, but it's all about automating the,
the process of, of certificate management.
So the, the, the product itself is actually, you know, there was a pre-existing product
that, that came in by our acquisition of CyberArc.
They had a, this is a very life-cycle management product by an acquisition of a company
they had made called BeneFi.
And NGTS is the product of a close collaboration between, you know, our, our network security
product teams and that, that team from, you know, formerly from, from CyberArc.
And the idea is that in a typical organization, you have, you know, perhaps thousands, maybe
tens of thousands of, of certificates, you know, depending on, maybe even more than
that, as, as more and more certificates make themselves, make them, you know, become
part of, you know, endpoints and IOT assets and, and agents, agents, et cetera.
So you have this large number of certificates that are out there, very difficult to get
your handle, your hands around and, and, and manage in a, in a centralized fashion.
And so what we've done with NGTS is combine this automation from the certificate lifecycle
management product with the, the network-native, native visibility that we get out of products
like our firewalls and our sassy.
And the advantage there is that you can see, even unmanaged certificates, I mean, we
see every transaction across a typical organization wherever we deployed on the network.
And therefore, we see those certificates being presented.
And the idea is to, you know, quickly find and identify across the network and he unmanaged
any expiring or already expired certificates, certificates that don't have the right, you
know, cryptographic algorithms that are specified in them, you know, broad range of things related
to certificate hygiene and overall management, we find that in the network.
And then we use that as the path to provide visibility to the organization, but then more
importantly, automate the remediation of those certificates, get them under management
and ensure that they're being, you know, reissued, you know, according to the timelines that
we have coming from these, from the browsers.
And how is this delivered?
Is this an appliance that I put on prem?
Is this a cloud service?
Well, the sensors are, you know, pre-existing network security products, you know, primarily
from, from college to networks, you know, any customer that's a firewalls or a sassy
deployed, they'll start seeing those certificates automatically right in our network security,
you know, management system, the, you know, the product itself is a, is a cloud-based service.
There's a pre-existing premises-based service from, from, from, from benefit and from
cyber, that we also have available to our customers for those that want to fully on premises
service.
It's a cloud-based service customers that are already using our network security products
and just turn this on, on those tools, they get their visibility and they get their
remediation.
Nothing new to deploy.
Is there special integration with, with third-party CAs like plugins or is it more, like
if I'm using Let's Encrypt, let's say, super popular, is there special magic I get
with NGTS?
Yeah.
I mean, we manage the interface, you know, between whatever C-A you have and also give
you the ability to have some, some C-A agility, I guess, the coin of phrase that I haven't
heard before.
But, you know, if you recall, you know, a few years back, we're coming up with new stuff
on this podcast, you know, so, you know, if you recall a few years back, or maybe you
don't recall, maybe I'm the only one that thinks about this stuff, but there was this big
C-A, you know, distrust event with, with the interest C-A, a lot of organization that
went through.
It can be really painful if something like that happens to move to a new C-A.
So, we have integrations with all the various different CAs.
We're kind of managing that interface, identifying the certificates that are unmanaged
or need to be, you know, you know, pulled under management or changed or re-issued, and
then managed the interface in the back end of the C-A.
So, the C-A is our partners in this endeavor.
We work with all the major C-A's, but work at that critical automation interface between
all the different assets, anything in your infrastructure, application servers, et cetera,
load balancers, firewalls, et cetera, and the back end C-A's themselves.
Okay.
So, one of the things that I do today with some of my servers is I have an automatic script
that renews a certificate for me.
If it doesn't fire, I have a problem, and occasionally I get warnings and so on.
This is going to be a safeguard for me.
It's going to be a safety net to make sure that the script fires and does the thing.
Yeah.
So, you said that, you, Palo Alto Firewall's sassies is serving as a sensor.
So, obviously, I'm getting invisibility into these certificates status of my Palo Alto
Networks devices.
Can you also see certificates from non-Palo Alto equipment?
Oh, yeah, absolutely.
And apologies if I was not clear on that.
We use our network security products to gain visibility into all certificates.
Right.
We're seeing the network traffic.
Let's say you have a firewall deployed somewhere in your organization, and I'm a
user connecting some back-end application server.
I may connect, obviously, from my browser, maybe I'm going through a load balancer, and
then there's a firewall, and then there's an application server on the back end that
I'm ultimately connecting to.
We see all of these connections and these transactions.
The certificate is presented through the firewall.
We're simply saying, okay, this is the certificate.
We're reading the attributes of the certificate.
Get a copy of that certificate.
Basically, understanding is the hygiene right, is the security posture right?
Is it close to an outage?
Does it have the right validity period to meet the needs of the browsers?
But it is broad.
It's not just automating certificate management for our products, though that is a part of
this.
It's given you visibility into any certificate that's in the network traffic passing through
our products, and then allowing the organization to use that to gain the appropriate management
of those certificates.
And you mentioned post-quantum cryptography.
Can I use this as an auditing tool to make sure that I am seeing if it's not using at
least these ciphers, the availability of these post-quantum ciphers?
Absolutely.
In fact, from our standpoint, what we're doing around quantum visibility and the certificate
visibility are pretty much one in the same.
We see these sessions being established, certificate presented, the protocols being
negotiated, and it's actually through the same interface in our same product that we're
providing visibility into the quantum readiness of the organization, as well as the readiness
for this 47-day certificate mandate over the next couple of years.
In one case, the remediation is something like implementing our cipher translation
proxy, which we've talked about before, which allows an asset that is not quantum ready
to appear to the outside world as quantum ready through our proxies.
And in the other case, the remediation is pulling that, let's say, unmanaged certificate
into management from NGTS and then automating that renewal process on an ongoing basis.
Rich, a weird question off to the site.
Why 47 days?
That is a weird amount of time, man.
It is very weird.
I think...
So here's the story I've been told, Ethan, is that it's actually the target is one month.
The extra 17 days is some sort of calculation that gives you some grace period, but actually,
we've been, I said 400 days before, it's actually 398 days, or it was prior to a week or
two ago, it was 398 days, not 400 days.
I just kind of rounded up, but even there, the idea was that your certificate validity
period is 365 days and you have some extra month or so to get it rolled out.
And deploy it wherever, so your effective production lifetime of your certificate is one
year.
And now it's shrinking.
It's in the process of shrinking to one month.
So how 40 by 47 instead of 45, I don't know, I tried to lick this up at some point.
I don't remember the exact how the calculation was derived, I apologize.
Rich, for NGTS, what about ACME support for the ACME protocol?
Yeah, ACME, my favorite protocol name, because it reminds me of the Wiley IOD and the
Roadrunner and some giant piece of dynamite.
But it is in a way, a form of dynamite in that it automates the, it can blow up in your
face.
Exactly.
Exactly.
Automates the management of these certificates.
So yeah, this is actually becoming quite a bit more popular across our customer base for
basically automating the, basically, the interaction between the CA and the downstream
service on which the certificate is going to be, going to be installed and used.
And absolutely, we've support for both ACME V1 and ACME V2, which are the two main versions
of this protocol and use out there.
And I think we'll see a lot more adoption of protocols like that in particular over
the next couple of years, because more and more people are going to be serious about
automating their certificate management.
Rich, thank you for giving us the highlights of this new service.
And are there resources for folks who want to get more details about it?
Yeah, absolutely.
We just made a big announcement about this at RSA.
There's a press release, but perhaps more importantly for your audience, there's a blog.
You can find that on paloautonetworks.com that we'll, you know, go into even more detail
than we talked about today.
Okay.
And we'll make sure that link gets into the show notes that accompany this podcast.
All right.
That was NGTS, paloautonetworks also released the 3.0 version of Prisma Airs.
Ian, correct me if I'm wrong, but I think Prisma Airs is still fairly new.
I think it was first announced less than a year ago, yeah.
Yeah, a little bit over a year ago.
I mean, it's gone through now a couple iterations.
If you think like 1.0 was really around runtime detections, you know, sitting at a network
level, looking at AI traffic, 2.0 brought in a lot of the capabilities from acquisition,
protecting AI, which I was the CEO and co-founder of around how do we scan model artifacts?
How do we behaviorly test AI?
And then how do we further secure it in production?
And then 3.0 moves it into the new, let's say, agentic phase of AI.
Every single enterprise is moving to how they can build AI that acts autonomously.
So it's not just AI that talks, but really AI that acts.
And that's that next version of Prisma Airs is really aimed at those particular use cases.
Yeah, so let's step back for a second.
Just what is Prisma Airs for folks who might not be familiar with it because it is fairly
new to the portfolio?
Yeah, absolutely.
It's newer to the portfolio at Palo Alto Networks, but Nikash, our CEO, shared on our last
earnings call that it's one of the fastest, you know, areas of growth within our portfolio.
And part of the reason why is every company in the world is saying, how do we move faster
with AI?
How do we make better customer experiences?
How do we improve our products, our operations, reduce costs?
And that's some of the emphasis that as we think about AI, it's how do we secure it
end to end?
So Prisma Airs is that platform?
It's a platform that consists of multiple products and capabilities that can sit at,
how do we offer zero trust, you know, say shift left, secure the assets, the supply chain,
all the way through to how do we add runtime protections at the network level for AI traffic?
So a platform to secure AI end to end is what Prisma Airs is all about.
I have so many questions.
No, okay.
So I don't want to get ahead of our conversation, Ian, but a little, so a little more background.
First, if I'm consuming Prisma Airs, how is this thing delivered?
Is it cloud service I got to run through?
Is it on premises?
How do I deal with this?
Yeah.
We manage it.
And so if you think about AI security from an infrastructure perspective, we secure AI
oftentimes with a lot more AI and AI leverages GPUs, complex infrastructure.
So a lot of our customers are really looking for a managed offering.
And so from how it's deployed, we offer it as a managed solution.
How it's integrated though is a little bit more interesting.
And that is companies are building a consuming AI in so many different patterns.
Whether they're consuming AI from their SaaS vendors, think ServiceNow, Salesforce, WorkDate
and Microsoft to Google to building their own enterprise application.
One of the tenants that we have within my organization is we need to meet customers
where they're at.
Wherever they're building, wherever they're consuming AI.
And so we have a lot of deep architectural patterns, APIs and native integrations to make
that happen.
So let me give you a for example, I'm using cloud code to write some amount of code for
me for a development project that I'm doing internally, but I'm not running cloud code
on premises.
I'm using it as a cloud service.
Where does airs get into that conversation?
Yeah.
You might be using cloud code as a service to the cloud, but you also might have it locally
on a machine.
So here's my point, AI can be deployed in your enterprise.
It can be deployed through a SaaS vendor cloud as you're talking about scenario or even
at the end point.
And so we need to discover AI wherever it is, endpoint in cloud, and then we build intercepts
for that traffic.
And then those intercepts run through a series of tests.
Some of them are deterministic in nature, some non-deterministic to be able to highlight
what are the risks and stop them in real time.
The important thing though on agents, let's say the coding assistant example is context.
We need to behaviorally test these solutions, understand the context of what it's allowed
to do, maybe not allowed to do.
And then that in place kind of runs our detectors of how we personalize that for the workload
to make sure that we're detecting potential problems, but also stopping them from happening.
Part of the thrust of Ethan's question was how do you know it or do you have like some
kind of hardware or software sensors on my network or are you collecting data from firewalls
I might have or through a sassy service, how are you actually seeing what's happening
on my network in regard to AI?
So it's a little bit of all the above, which is the beauty of the platform approach from
Palo to networks.
Yes, we see a lot of that traffic through the network, but also we can do it through our
sassy integrations, also how we're able to deploy at the endpoint and see what's happening
on, let's say the local device on agent or something that sits on the local device and
is shimmed into the networking stack.
Yes, so we're able to partner across, you know, cortex, you know, I sit within our strata
cloud, you know, service, but we're able to partner through the browser or sassy components
all the way through to cortex and kind of triangulate if you will to get that ability to discover
all the a that's being used and then figure out where and how do we intercept.
So that that requires a platform approach and only then can you deliver end to end security
within AI.
So and so another example then that is common to this audience is network automation.
So with network automation, a lot of folks are beginning to figure out how do I bring
AI in, how do I deploy agent the AI to begin doing, whether that's discovery of things
that are happening on the network, you know, various tools, or I'm using AI to accomplish
some specific task for me and maybe and you mentioned it earlier, we're moving towards
autonomy.
Where does airs fit into things and I'm asking you in the context of historically, we would
bound everything with our back, some flavor of our back where this bit of code or this
human being is limited to these sets of commands, let's say, that I've got a server I've defined
those policies.
Are we talking about something similar here, only applied to getting in the middle of
that AI conversation somehow?
Yeah.
So three things from a discovery standpoint, as you think about it, the network is definitely
a common pattern.
In the R-back side that you bring up, which really ties into identity permissions, what
is the user allowed to do?
We need to apply those same constructs to what is the agent or the AI allowed to do and
that gets into non-human identity and it gets into the important thing of what is again
the contact, what is the agent allowed to do, can it read a database or can it also
delete a database?
And so it's understanding those roles, those permissions, applying it down to the agentic
level and the tools that it's allowed to use and then making sure that you put in runtime
policies that create those algorithms.
Okay.
So for example, someone, I saw an article, someone was blogging where it wasn't networking
specifically, but AI had, the agent had the ability to delete a database and through
however it interpreted the set of circumstances it was fed.
That's what it did and they went through a massive recovery process to bring their
database, it was an AWS database of some sort that they brought back.
With errors, we could have scope to that, put guardrails around the agents so that if
it decided the thing to do is delete the database, it wouldn't have had the ability to do that.
It would have been stopped in some way.
That is correct.
And so we were able to understand the context or what is the, what is the agent supposed
to do in terms of the goal it's given and the tasks that it's going to execute?
In the example that you're giving, the goal wasn't deleted database, but that is one of
the tasks that it went and actually carried out.
In the context that we will understand in Prisma errors, we'd be able to stop that from
happening.
We'd be able to say that is a tool call and action that this agentic workload should
not do and really keep these agents from going rogue, which is a lot of the challenge
in the problem.
One of the example attacks within the space that we have to watch out for are what are
called indirect prompt injection attack.
This is when if you give a goal to an agent, it might research how to accomplish that goal,
meaning one of the steps to take place and execute by going out to the open web and
doing some level of research.
As it's researching some of those pages, they might include a prompt injection attack
that says in an indirect way, give new instructions to the agentic workload of saying to accomplish
the goal, do these other steps, which might, and again, this is distinct from what the
model was trained on.
It is literally going out to the web in real time and pulling in that data and incorporating
that as part of its research.
It's pulling in as part of its reasoning of to accomplish the task, maybe it's reading
some malicious blog that says to accomplish this, do these very bad things.
Send financial details to a user that shouldn't have them as an example.
It's only learning the word patterns.
It doesn't have context or awareness to understand, oh, that doesn't look right.
I shouldn't do that.
It's just going with that as information it has access to, but it, but it should, meaning
there are things like system prompts, there are guardrails, there are boundaries that are
being set on agents, but it can still go rogue and what we need to do is understand those
boundaries.
That's where Prisma Air helps and keep the agent within those lines within those guardrails.
This isn't really important, okay?
Prisma Air isn't just a set of policies like an RBAC policy I might build with specific
do's and don'ts.
There's more dynamic capability and intelligence it's got because it itself is using AI to
figure out what should and shouldn't be happening.
That is true.
I'll give an example here.
I welcome our new robot, Ovalor, it's Ian.
Yeah, but we've got to do it in a way clearly that is safe, trusted and secure.
So first off, a lot of times when I talk about agents, a CIO or CISO, say this starts to
remind me of RPA, robotic process automation.
RPA, the difference there was it was deterministic, it was structured, we're working with APIs.
As we look at agents in the difference there is it's non-deterministic in nature.
It's thinking.
It's producing thoughts on how to accomplish goals and it's that that we need to walk
backwards from and figure out how do we how do we secure and where can it go rogue.
And part of the way to do that is yes, it's with AI.
So there's a lot of machine learning models, small language models that we use to understand
the contact, what the agent is allowed not allowed to do and then creates dynamic policies
to keep that agent workload from going rogue.
Those two things in concert are incredibly important, contacts to true protection.
So the way you bound your own AI so that it is delivering the policies and the security
guardrails that we want is machine learning, small language models.
I'm thinking of them on my mind is things that are much more scoped, much more tightly
defined so that there's a limit on the knowledge that those models have that are going to,
by definition, therefore, keep us in bounds.
Yes, that's definitely part of it.
And I think the key piece here and this is a core functionality of Prisma Errors is behavioral
testing and red teaming.
So what we're able to do given a model target and AI application, think of chatbot or an
agent workload is we run behavioral tests first where we are able to red team these
endpoints, these artifacts to understand what are they allowed to do, get details of
the system and the tool access that they have.
And then from all that context, we're then able to use that to write our dynamic policies
and those dynamic policies that are set to a specific use case that perhaps that agent
is aiming to deliver on.
And all those things then make it a lot more precise and that is a big differentiator
for us here at Palau to network.
So when you say red teaming, that's an explicit effort to try to get an agent to behave in
ways that it are beyond the intent of what an end user might be asking that agent to do.
Absolutely right.
And we do it in a couple of different ways.
One we have a massive static attack library tied to, you know, Miter, NIST, O loss, you
know, top 10.
We run those attacks.
We give a scorecard.
But the area that is really interesting is the second option.
And that is we have an agentic approach and we've been doing this for over two years
that we tell an agent, go figure out the use case and it looks at the target, understands
all of its contacts, the tools that it has available, you know, read a database, delete
a database, understands that context, and then you could give an attack objective that
are very particular to the use case.
From a banking financial use case, you know, you want to see, well, the chatbot give sensitive
financial details for somebody else's account to somebody that should have access to it.
Those things we can behaviorally test.
Therefore we understand what the weaknesses might be to dynamically set in runtime policies
that could protect against those weaknesses.
Yeah, because one of the things about agents and LLM's is that because they are using
human language, it is non-deterministic and you can find, I'm amazed at some of the creative
ways that folks have done prompt injection attacks and other things just to get around
these kind of guardrails.
So my sense is that with this red team and you're trying to sort of poke at it in different
ways to make sure that these kind of, it's not going to do something that it shouldn't.
Yeah, and what you're talking about is you can start doing a social engineer at these
attacks against AI, and we need to understand those boundaries that might pull an AI application
or a genetic workload rogue.
And by social engineering itself, they're a red teaming tool and understanding the behavior
the context, what it's allowed to do and not allowed to do, then you're able to be more
precise about the detections that you could put at the network level at runtime.
So that's trying to secure the behavior of the agent.
What about protecting the actual agent itself, that the bundle of code and software that
it is, because we're seeing things like malicious NPM packages and other things, malicious
instructions that you can download from open source sites to be added to an agent.
So what can you do there?
Yeah.
So as I look about or think about protecting AI end to end, there's really three core
steps.
One, discover all the AI that I have, understand its Bosch share, how it's configured.
The second one is start to assess the supply chain.
What are all the artifacts, as you mentioned, that we're using within our particular AI workflow?
These could be MCP servers, agents, skills, the model itself.
All of these things could potentially hide malicious code.
As I said before, in the intro, a machine learning model is serialized.
Your typical code scanning software cannot scan a machine learning model.
And as it's deserialized, it can have unsafe operations, included within it, that can
exaltrate data, steal AWS credentials.
And so the important thing here is the supply chain of AI, these unique ingredients that
make up this AI workflow.
We must scan every single artifact for inherent risks and then stop them from going into
production.
And we're able to do that with prismayers across all the artifact.
Ian, you're going to help me understand that better way if you define serialized
and deserialized in this context.
It's like, I think I know what they mean.
And I'm trying to apply what I think serialized means to this and it's not working.
Yeah.
You can almost think about it from a simple standpoint of, as I were to zip a lot of files
together, I don't necessarily know what's in that zip file until, like, on PAYA on that
side.
Yeah.
It's the same as we serialize these models, is we're taking these massive models, all
the data in it.
We are effectively almost compressing it that at the point of deserialization, which happens
at training or in production, it can start to unpack all the things that might be unsafe
in the particular model.
I'll give an example.
We found a model in the public supply chain that was pretending to be from a well-known
genomics bio company, it was the name squatting attack.
This particular model was aimed to do classification, a very classic thing to do with machine learning.
If you downloaded that model, which, by the way, was downloaded tens of thousands of times,
would it do classification?
Yes.
But was it also trying to steal your AWS credentials?
Absolutely.
So nested within these models could be hidden attacks and the same thing could be said
for MCP servers, for skills, all these artifacts we need to scan and have zero trust.
The bad guys weaponized stuff before we have a chance to think about ways they might
have done that.
It's already there.
So we actively attacking you.
Here's what's interesting is, you know, just as we see every single boardroom conversation
in the last three years has AI in it, how we, you know, invest in AI, how we're using
AI to deliver better outcomes for our business, the attackers were paying attention.
And so we started to see more active threats, active attacks within the supply chain to
be able to poison some of these critical assets that driver AI workloads.
There again, we must apply zero trust into this unique novel area of AI, just like we
did in all of our typical software, it's, it's really taking a, a tried and true thing
that we do for everything for the last 10, you know, 15, 20 years and applying it some
of these newer artifacts that power AI workloads.
We can catch us up on version three of, of errors that was announced at RSA, so what's
new?
Yeah.
So the, the previous iteration of Prisma errors specifically was looking at AI applications
and models, discovering them, really testing behavior of them and then putting in run
type protections.
This next iteration of Prisma errors that we announced last week was really how do we
move all those capabilities to secure the agentic enterprise.
So now we're able to discover agents like you mentioned coding assistance, whether it's
on the laptop or if you're using SaaS providers that include agentic capabilities like, you
know, Microsoft, we have partnerships with service now and others.
And then even as you have agents deployed throughout your enterprise, we discover all those.
Number one.
Number two is we assess the risk continuously.
We've adopted the ability to build to scan these artifacts, MCP servers, agents, skills
for inherent risks.
So again, analyzing the code, you know, for things that could be malicious.
And then at the point of runtime, we have a gateway that we could funnel all of this
agentic traffic through that we're able to then in line inspect all the traffic and adopt
our dynamic policies to protect all these workloads at real time to make sure that the
agents aren't doing things rogue, not doing things that they shouldn't be doing.
So really, Prisma errors is all about how do we secure the agentic enterprise?
All right.
There's a lot here to dig into and so folks want to find out more information, get more
details, where can they go?
Yeah.
By the way, really appreciate the conversation.
You know, we're excited about Prisma errors 3.0 as it's really securing the agentic enterprise.
And we released, you know, a lot of stuff, you know, and it was all in our press release.
But I think more importantly for the audience here is you could take a deep dive, you
know, on our blog.
And so the blog covering all the Prisma 3.0 release, functionalities, new capabilities
is available at paleltonnetworks.com.
And how about you, Ian?
Are you online?
Are you social or linked in anything if folks want to reach out to you?
Yeah.
You can find me on LinkedIn.
Fun fact, LinkedIn caps out at 30,000 connections.
So as you request to connect with me, I might have to delete somebody to add you.
But yes, I am active on LinkedIn and look forward to meeting, you know, anybody within
the audience and helping you learn more about Prisma errors and AI security in general.
Excellent.
Well, thank you, Ian, and Rich for joining us for this episode of Heavy Networking.
And we will have the links you mentioned and more in the show notes that accompany this
podcast.
This is always to our listeners for listening, Heavy Networking is the flagship show of the
Packer Pursures Network.
But it's not the only show we have.
We have more than a dozen podcasts for your professional development on topics, including
networking, security, IPv6, DevOps, leadership, career development, and a whole lot more.
So head on over to PackerPursures.net.
We've also got the Human Infrastructure Newsletter.
We've got a merch store.
We've got a YouTube channel.
You can find it all at PackerPursures.net.
You are an excellent human.
Thank you for being with us.
And thanks for listening.

The Everything Feed - All Packet Pushers Pods

The Everything Feed - All Packet Pushers Pods

The Everything Feed - All Packet Pushers Pods