
Please consider supporting the DefSec podcast here.
Here are the links we discuss this week:
[♪ INTRO MUSIC PLAYING ♪]
Welcome to episode 341 of the Defensive Security Podcast.
My name is Jerry Bell, and joining me today, as always, is Mr. Andrew Kalat.
Good afternoon.
I'm good.
How are you, sir?
I'm okay.
I'm okay.
I'm working again, so yeah.
Like John Wick, working again, that kind of working again?
Yeah, exactly, exactly right.
Well, welcome back to the world of the gainfully employed, and I'm sorry.
You know, I feel like, especially in this economy, I should be super happy and, you know,
to have a job like, I like the people that I'm working with, and it's a good company,
but I am not going to lie, I'm mourning my freedom, just, it's a, it's a different
thing.
So, anyway. You should have won the lottery. Yes, exactly, exactly.
I'm going to work on that strategy.
I hear you.
I think, I think all of us can, can relate.
So anyway, onward and upward, someday, I will retire and I won't be, won't be going
back to work again, so today is not that, today is not that day, and neither is tomorrow
morning.
Okay, so first off, I do want to extend a sincere thank you to our Patreon sponsors.
Thank you very, very much and offer a reminder that if you would like to support our little
endeavor here, you will get the high honor of being able to listen to our episodes a week early, a week before the unwashed masses.
Thank you very much for supporting us and you are awesome.
I agree.
Sorry, I had to get my mute off.
Thank you all of you who contribute as we were just saying, hard earned dollars to us.
That's humbling and I hope we make it worth it.
Absolutely.
And then a little unrelated, but certainly more impactful now for me is that the thoughts
and opinions we express on the show are ours and not those of our employers.
So there you go.
Fair enough.
I will say, by the way, I am now customer facing, which is the first time in my career
that I have done that.
So it's like a big step forward for me.
I thought it would be a lot more jarring than it is, but I'm kind of liking it.
But this whole opinions thing certainly, I can see how that could come around to bite
you.
Love it.
Do we have to put in a bunch of disclaimers about who you're talking to, or a bunch of things we have to avoid now?
No, no, I mean, obviously, I will just avoid any no-fly zones.
That's fair.
It's an interesting challenge.
Absolutely.
All right.
Just wait until your customers find it and go, were you talking about us in episode blah,
blah, blah, blah, blah?
Believe me, believe me.
That wasn't you.
It's going to happen.
It's always interesting, by the way, and this has happened over the years when I'll get
on a meeting and somebody will recognize my voice.
Where do I know you from?
America's most wanted.
Yeah, and then there's obviously the ones that are like, hey, you should have me on your
show.
I get that.
Well, we did have an interview show for a minute that sort of could still happen.
Could still happen.
Yep.
We started, and then we stopped.
So the first story we have today comes from Bleeping Computer, and the title here is Amazon, AI-assisted hacker breached... let me try that one again. It's Amazon saying: AI-assisted hacker breached 600 Fortinet firewalls in five weeks.
So this is making the rounds, and I don't want to call it disingenuous, because it's not; it's perhaps a little confusing, right?
So, so the story here is that as best I can tell, Amazon and several other security research
firms came across the command and control infrastructure for a threat actor.
And I think that this, at least part of their infrastructure was hosted on Amazon or AWS,
and that's how they came to become aware of this.
But this particular threat actor was exploiting the problem that I hate most in this world, which is Fortinet firewalls with their control panels exposed to the internet.
Of all the things in the world, that's what you hate most?
Well, it's up there.
It may be not most, but it's, you know, it's probably in the top three.
So compared to, like, a sweet innocent child dying of cancer, you're like: Fortinet management interface exposed to the internet, still worse?
Well, you know, you're like a big wet blanket sometimes.
Am I? Anyway.
So in the cyber world, this is what you hate. Sorry, I warned you, I warned you before the show.
I'm feeling spicy today; I've had a rough week.
You're right.
I'm sorry.
It's all good.
Jerry's just, you know: think of the children, turn off the interface, in the security world.
Anyhow, there's apparently not much in terms of AI going on with the initial compromise, right?
Basically, they're credential stuffing, right?
We don't exactly know how they're picking credentials to use, but we know that they're picking these Fortinet firewalls as targets of opportunity.
They're not targeting specific victims.
They're just looking for installs that are exposed to the internet, and then they're stuffing credentials and getting lucky, apparently, 600 times in five weeks.
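Purely as illustration of what defenders can look for: credential stuffing tends to show up in firewall or VPN auth logs as failed logins across many distinct usernames from the same source. A minimal sketch, with a made-up event layout and threshold (nothing here is from the article):

```python
from collections import defaultdict

def flag_credential_stuffing(auth_events, threshold=10):
    """Flag source IPs that fail logins across many distinct usernames.

    auth_events: iterable of (source_ip, username, success) tuples -- a
    simplified stand-in for whatever fields your device's auth log exposes.
    """
    users_by_ip = defaultdict(set)
    for source_ip, username, success in auth_events:
        if not success:
            users_by_ip[source_ip].add(username)
    # Many distinct accounts failing from one IP is the stuffing signature,
    # unlike one user repeatedly fat-fingering their own password.
    return {ip for ip, users in users_by_ip.items() if len(users) >= threshold}

events = [("203.0.113.7", f"user{i}", False) for i in range(25)]  # stuffing pattern
events += [("198.51.100.2", "admin", False)] * 30                 # brute force, one account
print(flag_credential_stuffing(events))  # {'203.0.113.7'}
```

The point of the threshold is to separate stuffing from ordinary brute force or typos; real detections would also window by time.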
Now, where the AI comes in is kind of post-exploitation. They're using AI to analyze the data that they've extracted from the Fortinet firewall, which includes things like the topology of the internal network, you know, IP ranges, credentials that are stored on the device, which might be used somewhere else inside the victim network.
And then they used some kind of vibe-coded assessment tools to scan the internal network, looking for, as they describe, you know, open SMB shares, and figuring out how to deploy some pretty basic tools like, you know, Meterpreter, and using Mimikatz.
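For context on how unremarkable that post-exploitation step is: "scan the internal network for open SMB" usually starts as a plain TCP sweep of port 445. A bare-bones sketch, to be run only against networks you own; the CIDR below is just an example:

```python
import socket
from ipaddress import ip_network

def hosts_with_smb(cidr, timeout=0.5):
    """Return hosts in the given CIDR that accept TCP connections on 445 (SMB)."""
    open_hosts = []
    for host in ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(host), 445), timeout=timeout):
                open_hosts.append(str(host))
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_hosts

# A real tool would follow each hit with share enumeration (e.g., via
# impacket or smbclient); this only finds hosts with 445 listening.
print(hosts_with_smb("192.168.1.0/30"))
```

Sequential connects are slow; real scanners parallelize this, but the logic is the same.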
So none of this, in terms of attack techniques, is novel.
They're basically using AI to automate, as far as I can tell, kind of at scale, things that your average pen tester would know how to do in their sleep.
They're not creating zero days here.
They're just, you know, automating what you would see in kind of a basic pen test.
But the interesting thing is, and it's certainly a little unclear, it does feel like the person or people who are behind this might not have had the level of sophistication to do this on their own, even at a smaller volume, right? And so I think that's the thing that was notable to me.
We've talked about this for some time, you know, that AI is going to create opportunities for people who have a desire to commit cybercrime but might not otherwise have the knowledge and capability to. What's interesting about this story is that it's that, plus it's actually happening, and it's allowing them to do it at scale. Compromising 600 devices in five weeks is pretty aggressive, pretty productive. And from the article, if I recall, it was not targeted. It was just targets of opportunity that they were broadly scanning for, I think, across like 55 countries. Yeah.
Yeah, you hit on the notes I had too, which is: it's making more sophisticated or more impactful attacks easier for less skilled attackers, which means you're just going to have more attackers out there. But the thing is, it doesn't dramatically change our problem much. It just creates more attackers. There's not necessarily AI magic here. It's not like the TV shows where we're just going to throw AI at the firewall and break through. The fundamental vulnerabilities still need to exist for the AI to exploit them, or for the AI to know how to exploit them; the same set of problems exists. AI is just enabling less skilled people to attack them without having to build that skill set on their own. They're outsourcing that ability to attack, that methodology. And of course, the inevitable next step is, well,
hey, AI providers, why aren't you limiting this? Where are your guardrails for this? Well, I think that genie's already left the barn, to mix metaphors, because we've got a lot of local LLMs that you can load that are nearly just as powerful, that don't have to have any guardrails, or you can strip them away in your own private little lab. So I think that's not going to be one we can stop at this point.
Oh, 100%. What is interesting, or perhaps where this is all going, in my view, is creating a similar sea change to what we saw with ransomware. Prior to ransomware, you know, you had some solace in knowing that you weren't an important target. You were a nobody. Your organization was not controversial. It didn't process lots of money, whatever.
But after ransomware came on the scene, it didn't matter anymore. The only thing that mattered was that you had something that was exposed, and the threat actors were able to find you, and then, you know, they had kind of well-worn ways of monetizing a breach that didn't rely on you having certain types of intellectual property or what have you. So I think this is going to create, very likely, a similar kind of shift in the industry, where, because there are going to be perhaps so many threat actors operating at a higher level of sophistication than we've seen in the past, the likelihood that we're going to see successful exploitation of kind of anything that is left open or exposed is going to go up significantly. The other thing that I was thinking of when I read this: we've seen the stats, and I don't remember them off the top of my head, but there's a very small percentage of vulnerabilities in a given year that are discovered and then used in a successful attack. It's just a couple of percent, even, by the way, the ones where there's proof-of-concept code, which is interesting to me. But I also wonder if this is going to change that. I mean, it might not change overnight, but now you have this engine that allows an attacker to kind of enumerate what's exposed, and obviously it knows what kinds of vulnerabilities exist and what exploit code exists. I think it might
have the ability, you know, to pair things up better for a much larger swath of threat actors than we've seen in the past. But my net concern is that we're going to see a lot more attacks being successful, by a broader set of adversaries who might themselves not be sophisticated, but the attacks that they're launching look and feel sophisticated.
Yeah, I would agree. I mean, the one potential upside is blue teamers can do the same thing against their own infrastructure and leverage the same tools to find what the bad guys would find. It's not like these tools are exclusive to the bad guys. We just need to have the time, the cycles, the budget, and the skill set to do it as well. So absolutely. And, you know, maybe without the
leaking all the data by accident part. That is the bad part yes I agree.
Causing you know a 12-hour outage on our environment by accident you know those sorts of things.
All right, so moving on to our next story. This one comes from The Register, and the title here is: open source registries don't have enough money to implement basic security.
This sounds like a topic we've talked about a few times tangentially, right? You know, this sort of problem. It was interesting to see this laid out in print. Like, you know intuitively that that is the issue, but just to kind of back up for a second: there are a lot of repositories that exist, you know, whether it's npm... I think everybody's familiar with GitHub; that's not technically open source, it's got a company behind it. But there are a bunch of them, and a lot of them are, you know, not
organization-run. And the point here is, you know, those things are being used pretty extensively in a lot of contemporary attacks, and people like us are sitting here saying, gosh, they're going to have to do something to improve their security, because they're being used as the getaway vehicle to rob banks, and maybe they should get on that. But the point here is that a lot of these repositories are themselves kind of, quote, open source: not just the software that they're running on, but they're being run by volunteers.
They're being run using donations, and this article is the product of a report by a company that looked at the operations behind some of these big code repositories. Basically, they're using all of the resources they have to keep the lights on, to pay for bandwidth and storage. And, you know, it's not prominently called out in the article, but I have to believe that the impact of AI is probably also putting a severe strain on them, because I have to believe there are immense numbers of new repositories being created, and, you know, pull requests and issues created and whatnot, all consuming resources in these repositories. So I think the problem is getting worse, not better. Yep, and to that point, we're
also seeing, although it doesn't necessarily directly impact the owners of the repositories, that AI is also contributing a crap ton of slop in terms of security reports going to the maintainers of some of the projects on these repos, which is tangential to your AI-causing-more-load-not-less conversation, for sure. And it's interesting: the point of the story that I got out of it is, hey, we know we have a lot of security challenges, we have zero available income to cover it, and here's all the ways we looked at to generate that income, and they all suck, and they all have problems, and they all, like, don't work. So, anybody got any ideas how we can generate money for this service we're providing for free, basically? And I think it's a fair ask. It feels a little bit like a broken incentive cycle. I had this noted for later, deeper in the article, but this almost feels like a demand problem: the consumers of this information are demanding it for free and aren't willing to pay for it. And as they argue in the article, if we did try to set up some sort of paywall, somebody would just go around us and set up another one for free and perpetuate the problem. So it's this weird incentive model that doesn't seem to scale well,
and I think we're starting to see some real cracks in that armor as a result. Sorry, just to jump to the end: I think ultimately the only way this gets better is when enterprises who are consuming this for their own use demand a different model and are willing to pay for it.
Mm-hmm. And I don't know when that's going to happen. Yeah, I don't know either, and I think that itself also has problems, because if a for-pay model is what emerges out of this, you're perhaps creating a wall to jump over for other open source projects that they just can't afford themselves, right? There's not a great solution here.
In the article, they actually talk about different monetization models, and you leave the article thinking, gosh, there's just not a way to close this gap. I do want to read one specific thing to give a sense of scale. They're talking about PyPI, the Python repository. Let me quote it: "In some cases, benevolent parties can cover these bills. Python's PyPI registry bandwidth needs for shipping copies of its 700,000-plus packages, amounting to 747 petabytes annually at a sustained rate of 189 gigabits per second, are underwritten by Fastly, for instance. Otherwise the project would have to pony up about 1.8 million dollars per month." Wow. That is off the hook.
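For what it's worth, the quoted figures are internally consistent; the per-gigabyte rate below is my own back-of-envelope estimate, not from the article:

```python
# Sanity-check the PyPI bandwidth figures quoted above.
PB = 10 ** 15                               # petabyte in bytes (decimal, as CDNs bill)
annual_bytes = 747 * PB
seconds_per_year = 365 * 24 * 3600

sustained_gbps = annual_bytes * 8 / seconds_per_year / 1e9
print(f"{sustained_gbps:.0f} Gbit/s")       # ~189, matching the quoted rate

monthly_gb = annual_bytes / 12 / 1e9
implied_usd_per_gb = 1_800_000 / monthly_gb
print(f"${implied_usd_per_gb:.3f}/GB")      # ~$0.029/GB, a plausible CDN egress price
```

So the $1.8M/month figure is roughly what list-price CDN egress would cost at that volume.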
Like, Fastly is amazing. They're super generous; they have an open source program that PyPI uses. I actually use it also for the infrastructure behind infosec.exchange. My bill wouldn't be 1.8 million dollars, but it would be like $8,000 a month, right? Which is a lot, right? And they point out in here that there is this kind of conflation between
open source and open source infrastructure, and I feel that deeply, because I run... you know, Mastodon is the software that I use. It's open source, but I have to run it on servers. My server bill, between servers and storage, is close to 3,000 bucks a month, right? Donations cover that, but that's a lot of money to run free software. Yeah, and all the moderation headaches that
come with it. Yes, I've learned a lot about humanity in the process of moderating that. And yet you're still relatively sane? I'm surprised. I have a group of very competent and dedicated moderators that help me, and so we kind of run round-robin. And honestly, I thought you said very competent, dedicated therapists, but that's good too. I'm glad that you've got help. To go off on a tangent, there are, you know, moderation communities, including
for my own team, and one of the running jokes is that at some point there are going to be therapists who specialize in treating people who moderate online communities. Like, there
has to be, because it's awful. I'm not going to sugarcoat it: it's awful. People are terrible. Yeah, well, I know we're making light of this, but to get very serious for a moment: the folks who deal with, like, child sexual abuse material have serious mental trauma, I can understand, and probably need some very serious mental health support to get through that job. So absolutely, I think it's a real problem. And yeah, you know what would help? AI.
I think AI could solve this, but then someday we're going to struggle with the problem of AI having emotional problems from what it's been exposed to. Maybe we'll just give it an AI therapist, it'll be fine. Find me a problem that can't be solved with AI, that's what I'm saying. I'm not there yet. Look, I'm listening to my executive leaders; that's what they're telling me: all things through AI. Anyway, sorry, back on the last story for just a minute. I feel for these guys. It's a tough job, and we've talked a lot about how the maintainers are also getting damaged and hurt by their platforms being used as a staging area, and their trust abused to distribute
attacks, malware, whatever it is. It's a tough problem. I think it probably is going to have to change. I think it cannot continue on the path it's going. I just don't know what that change looks like. Yeah, I don't either, but it will; it'll have to change. We have seen... one of the points
of this article is they kind of break down where the donations go for some of these big volunteer-run repositories, and, you know, the bandwidth is the biggest cost, but there's very, very little money, if any, dedicated to, like, proactive security development, and that's the core challenge. But even so, we have seen some improvements. You know, npm... you had sent me an article about some improvements npm made to their security posture late last fall, where they're changing how credentials work, and entitlements. So they're trying, but they have a lot of kind of structural challenges to work through, not of their own making, but kind of due to their own success. And they have to overcome the inertia and figure out, you know, on the cheap, how do we make improvements to deal with kind of the problem du jour? And they also talk...
This is something that, you know, I hadn't really contemplated a lot, but a lot of their donations come from very few organizations, and so they highlight the challenge: if even one of the donors that are behind some of these repositories were to pull out, they would be up a creek. And so, you know, this reminds me of that very pervasive, probably at this point ancient, cartoon where you've got the stack of blocks and the one tiny little peg off to the side that's supporting the entire thing. That is these repositories, and it's a weak point in the ecosystem. Again, all credit to the people who do it. Yeah, this is not a criticism of them.
Like, they're amazing, what they pull off, but the deck is stacked against them. Clearly the government should help; they need to take it over. That's the only way. I thought you were going to go with AI. Well, isn't the government being run by AI at this point? Sometimes I wonder if it might work better if it were, I don't know. There are literally countless sci-fi stories pondering that. I don't know. I don't even know anymore. I'm sorry, I told you I'm spicy today. We should just move on.
all right so our next story comes from bleeping computer the title here is info stealer
malware found stealing open claw secrets for the first time now I think last episode we talked a
bit about how there were quite a few malicious skills being posted into the open claw skill repository
I guess so this is this you know a a a specific case of one of those that is has been observed
actually stealing and basically everything all of the secret information the public and private
keys the you know the configuration info information this this company Hudson Raccoon did this
analysis is determined from their assessment victims again depending on what they've put in
have provided everything that would be required to you know basically steal their identity or
that they're at least the identity of their of their AI agent which is not awesome because
this is a freight train roll down the tracks and you know what we're kind of building trust
around these some of these digital agents this is really explicitly but it's happening and so
this is this is a little I think a little concerning but the one thing the the reason I wanted to
include this because I had a joke it's well it's not really a joke but but a funny comment like
I think where we have to get to now is we're going to have to have security awareness training
for our AI agents you might be right so how quickly can an AI ignore security awareness training
versus a human ignoring it? It's just going to have it playing in the background while it's off, right, vibe coding or something, clicking through the slides, iterating with guesses on the quizzes until it gets it right. Right, exactly, 100%. So, while I thought that's very funny, the concern is, I do think that especially corporate use of these tools without some kind of structured, common approach is just going to continue to cause problems. Now think about this issue in the context of an organization, right, where perhaps you have your GitHub keys in there, or something like that, something sensitive that allows access deeper into your infrastructure, and threat actors are able to clean that out, because employees who are using OpenClaw or equivalent are inadvertently downloading these malicious skills that are turning around and cleaning house.
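One modest mitigation, whatever the agent framework in play: treat its config directory like an SSH key directory and audit permissions. A small sketch; the directory path in the usage comment is hypothetical, not OpenClaw's actual layout:

```python
import stat
from pathlib import Path

def loose_permission_files(secrets_dir):
    """Return (path, mode) for files under secrets_dir readable by group/other.

    Anything holding API keys or private keys should be mode 0600. An
    infostealer running as you still wins, but loose permissions widen the
    blast radius to every other account and process on the machine.
    """
    findings = []
    for path in Path(secrets_dir).rglob("*"):
        if path.is_file():
            mode = path.stat().st_mode
            if mode & (stat.S_IRGRP | stat.S_IROTH):
                findings.append((str(path), oct(stat.S_IMODE(mode))))
    return findings

# Hypothetical usage -- substitute whatever directory your agent keeps keys in:
# for path, mode in loose_permission_files(Path.home() / ".agent"):
#     print(f"{path} is {mode}; consider chmod 600")
```

It doesn't stop a skill that runs as your user, but it's a cheap check to fold into existing endpoint auditing.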
As we've talked about in the past, the challenge is that slowing that down is, at the moment, one of the most unpopular political things you can do. Yep. I mean, the way that I'm approaching it is trying
to nudge it towards safer tools that do the same thing, but boy, that is tough. It's evolving so quickly, and there's this incentive to use the latest and greatest or get left behind that is very strong in most organizations right now. And to say, hey, use this tool from vendor X that we have a data privacy agreement with and a contract with... yeah, I know it's not as cool and as shiny and as nice as the new hotness you just heard about online, but it's safer. Well, I don't care about safer, I care about the best, I care about the fastest, I care about what's most relevant, I care about the thing that my competitor's doing. That's a real problem. Countering that is a real challenge. You need a lot of executive backbone and alignment to not go down the path of the new shiny. I think for a lot of security teams it's just not
going to be a surmountable problem and I think it's going to be on us to make sure that the
business is going into this with their eyes open on the potential risk and you know
everything you do in business is a risk, right? Like, you hire a new employee, that's a risk. You launch a new product, that's a risk. This is another risk. At best, the business needs to understand that there is a risk, and they choose to accept it. You know, I think our job is to help mitigate the impacts as best we can. This is something we've said in the last year too: I think it goes to, are we adequately explaining the risk? Are we informing leadership well enough to understand the risk they're accepting?
I don't think so, and I'll tell you why. Because I think a lot of security people... and I'm not going to use the word Luddite, right, because I don't think that's the right term for this, but I think there's a lot of kind of tacit rejection of AI in the security community, or at least in segments of it. And in that way, if you are among that group, you probably don't have the ability to clearly articulate what the actual problems are, right? Like, your response is going to be: it's bad, you know, it's unethical, it
is bad for the environment. And, well, okay, that's great, but tell me something about how it's going to hurt my business. That's what they're really looking for, and if you don't have a deep understanding of the technology because you've kept it at arm's length, I think you're at a big disadvantage in helping your business understand, articulate, and comprehend what those risks are. And so that's why, and I've said it for a long time, we've talked about this many times, the people who are going to be successful in this business are going to be the ones who are very fluent in AI. And it's not necessarily because they're the ones who are going to be deploying AI, but you have to understand it, the same way you have to understand cloud to be able to articulate the risks of operating in the cloud, or you have to understand
firewalls, you know? You have to understand the technology. That is a very fundamental attribute of a security person who's operating in the business. So if we're choosing not to do that, we're kind of limiting our ability to work, I think. Yeah, that's fair. I would say I don't think security people are Luddites. I think they're incentivized to care about risk, and business leaders are incentivized to care about possibility and opportunity, and those are in tension, as they should be. I think we're just not very good at understanding and articulating
risks around this area yet and the promise and the opportunities sound very compelling to the
point where a lot of executives go that's fine we'll take the risk we'll see what happens
there's no risk-free way to do this, and frankly, best I can tell, all my competitors are doing it, so I damn well better do it as well, or I'm going to be left behind.
Yes, yes. And, you know, I don't know that they're necessarily wrong in thinking that. There's a lot of productive use. I think there's a lot of BS use too, and that BS is going to fall by the wayside. I will say, I think a lot of the use of AI is kind of below the line of consciousness, and that's maybe not the right word, right? But, like, we keep seeing big headlines about how many investments in AI don't return results, but those are typically big initiatives, and I'm not surprised about that, because how many big IT initiatives of any kind ever return the results that they promised? So I'm not sure that should be a big surprise. But at the same time, I also think, and this is first-hand experience from working with customers now, there's a lot of kind of below-the-radar use of AI to drive productivity. Yeah, it's like we don't talk about Excel
projects anymore, they're just there. Well, one thing that I do think about: I think people don't realize this, but when Spielberg made Jaws, based on the book, it was about AI risk, right? And so if you think about it, the mayor saying, look, you guys do whatever you need to do, just secure those beaches, but those beaches are going to be open on the 4th of July, because our town relies on the money of those beaches being open: that's your executives. That's them saying, look, we have to do business, and we're going to do business, or none of us have jobs. So you security people, i.e., the sheriff and his deputies and Brody, go do what you need to do to deal with your little shark problem, but we've got business to do. That's exactly what an infosec career is like right now. And for the record, only what, three or four people got eaten, while thousands of others enjoyed the beach? That's an acceptable loss to most business people. You've just got to go back and watch it, and you're going to understand where we're at with AI. It's fine. It's fine.
your ability to make that connection is god damn impressive
That's why you have me on the show. I am stunned. That is amazing. Thank you. By the way, on that topic, I have had this thought about where we're headed, in terms of the potential harm, because I was having a discussion with someone about how a lot of the software being written today is being written by AI, right? Whether we like it or not, it just is. Probably a lot of it by people who don't fully understand how to do it in a secure way, and it's also being done, you know, under the pretense of cutting human resources out of the equation. And so there's probably a lot of what I'll call breach potential, breach energy, being built up in the software that we're building. But I got to wondering:
when we as a civilization relied on horses, people didn't die often. I'm sure it happened occasionally, but people didn't die often in, like, horse collision accidents, right? And then we adopted cars, and you have to imagine that people were pretty terrified, because you've got this car that can go pretty fast, and you can crash, and there's going to be some number of fatalities. And we became okay with that. It's devastating, it's catastrophic when it happens to you or someone you love or someone you know. It's awful, and it's a tragedy that so many people get hurt and die in cars, but we have tacitly accepted that as a society. And I just wonder if we're going to have the same sort of thing emerge around the security of it. And if that is the case, what does that say about the future of our industry? Right now, according to Google's AI, which you should take with an appropriate grain of salt, there are between 40,000 and 47,000 annual deaths in the United States alone
from automobile-related injuries. We just accept that. We've normalized that as a society. That's the cost of doing business, right? Which, by the way, is tragic, like you said, but we're good with it,
And this is where, as we've talked about a lot, and I know this is not your point and I'm going off on a tangent, we as humans are bad at estimating and understanding risk. This is a perfect example: we report breathlessly on a plane crash that kills 200 people, while that same day a thousand people died in automobile accidents around the world. And then we're like, hey, it's okay. This is where, and I don't mean to get political, but when you hear politicians say, "We're doing this because if it saves just one life, it's worth it," that's not true. That's not real. If they actually governed that way, we'd have five-mile-per-hour speed limits everywhere. We don't, because that would be too costly for society. So we absolutely accept a certain amount of death, destruction, dismemberment, and trauma in order to get things done, and I think it's the same thing with computer security. Companies will accept it, because to be completely secure you'd have to drive at five miles per hour, or the business equivalent thereof, and we wouldn't get anything done. We wouldn't be a business. Sorry, I'm down a weird tangent here, but that is kind of where I was going.
Obviously it's not exactly the same thing, but my point is, if we are willing to accept that amount of death in the name of progress, what is the likelihood that we're also going to accept breaches and stolen data and theft of money as just part of, hey, AI was worth it? You know, it does a bad job sometimes of writing secure code, and we're going to lose data, but hey, it was worth it. And I don't mean that at an individual company level; I mean at a societal level. Again, from the perspective of cars, yeah, we make improvements, we put in airbags and seat belts and whatnot, and I'm sure we'll do the same thing with software and IT.
Hope I'm wrong, by the way. Hope I'm wrong.
I don't think you're wrong. And you can think, by the way, about improvements in security in cars: there's lots of evidence that the safer you feel in a car, the riskier you'll drive. I think that applies to computer security, too. There's a psychological model, one I'm not smart enough to articulate, that the more your employees feel that enterprise corporate IT is taking care of security, the riskier they can be. I'm sure there's a correlation there somewhere, and I'm sure people have done research on it that I'm just not aware of. Now I should go look it up.
You know, I think there are always pros and cons. I remember when AirTags first came out, and there was a lot of advocacy from victims, people who were up in arms over the stalking capabilities of AirTags, how they could be used to victimize people, the terrible aspect of it, and why they should be banned. But there were a lot of other people saying, look, I just recovered my kidnapped daughter because of an AirTag, or I just found my lost bag. There's always a pro and a con, and the problem I think we in society have right now is that we only very rarely will listen to anyone who has a balanced view. We go to the extremes. We are addicted to the outrage. We want one side or the other, and we rarely can sit down, at least as reflected by social media, including your site, sir, and say: yeah, it sucks, but it has some benefits, so let's try to minimize the downsides without throwing the baby out with the bathwater. And I get it, maybe that is the role of an activist, to only show one side, but that can't be the end of the conversation. There has to be a pragmatic center: okay, here are the pros and cons; we can work on minimizing the cons while maximizing the pros. But again, this goes back to: you can't just stop innovating because of potential risk. And I get that people who have very specific, valid concerns can raise those, and should raise those, but it shouldn't be the end of the conversation.
Fair enough.
Which is a weird thing to say as a security guy, because we normally focus on just the downsides. Does this make sense, or am I just rambling incoherently?
No, no, it's basically what I was saying.
I'm sorry, did I completely, like, steal your thunder and say it much better?
Doubtful. Kind, but doubtful. All right, let's move on. We know that we've lost a lot of people.
Oh boy. This one comes from SecurityWeek, and the title is "API Threats Grow in Scale as AI Expands the Blast Radius." You know, I think AI's involvement here is a little tangential. AI, sorry, IT writ large is moving to APIs. I mean, go back to service-oriented architecture; this is really just the new spin on something that's been emerging for a very, very long time. What I think is new is that in order for AI to work, you have to have pretty liberal use of APIs, and that's where I think this starts to come in. They point out that in 2025 there were about 60,000 published vulnerabilities, and about 11,000 of those were related to APIs; and if you look at the CISA KEV list, about 43 of the exploited vulnerabilities are API related. Now, I haven't done my own checking to see whether that's a stretch or not, but it's here in print, so it must be right. They can't put anything on the internet that's not true.
But they're pointing out here that AI is kind of supercharging a problem that has been with us for some time. If you think about MCP, which has become kind of the de facto standard for how AI actually does stuff, it's very dependent on APIs. And if the APIs for your MCP server are exposed, for example, to the internet, because why would we put them on some kind of restricted network, they can then be leveraged to do bad stuff. Adversaries can, and have been, using that kind of access to do bad things through a basically hacked MCP server. One of the things I wanted to talk about: way back in the old
days when I've worked at a cloud shop one of the challenges we had is with
inventorying the ownership of API endpoints and so if you think about like
typically in APIs it's like uh you know it's it's a HTTPS listener on a you know bound to a domain name
or host name and inside that when you make a call like you can you can make all sorts of you know
kind of an infinite variety of of calls to individual endpoints you know based on a path and you can
do post gets and and puts and and whatnot the actual owner of that the host the listener host
typically is pretty easy to identify where it starts in my experience especially larger
larger organizations that are more complex each one of those endpoints that is kind of obscured
becomes a lot more difficult to manage and you know the ownership of
it was it was always a a challenge to make sure that we had a full inventory of what those
endpoints were inside our APIs and who was responsible for them have they been contested and
are they conforming with our standards and and so like this problem it's like a it reminds me of
like a fractal like the the closer you look at it the bigger the problem gets and the closer you
look at it the bigger the problem gets and and so you know I think that's so again the little
tangential from or separate from the MCP problem but I think for a lot of organizations as we're
going down the path you're going to start developing the same sort of issues that we had as a cloud
provider or where that becomes an actual challenge you know we are whole sales we've talked about
embracing AI and as part of that we're going to continue seeing this explosion of API endpoints
making our widgets consumable by via API because that's just the the way the world is working
it is going to become a challenge for everybody
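The inventory-and-ownership idea Jerry describes can be sketched in a few lines. This is a minimal illustration, not anything discussed on the show: it assumes a hypothetical convention of tagging each operation in an OpenAPI-style spec with an `x-owner` extension field (the field name and team names below are made up). The point is just that unowned endpoints become something you can query and flag, rather than tribal knowledge.

```python
# Sketch: walk an OpenAPI-style spec and flag endpoints with no owner.
# The "x-owner" extension field is a hypothetical convention, not a standard.

def unowned_endpoints(spec: dict) -> list[str]:
    """Return 'METHOD /path' strings for operations lacking an x-owner tag."""
    missing = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if "x-owner" not in op:
                missing.append(f"{method.upper()} {path}")
    return missing

# Example spec: three endpoints, only one with a team assigned.
spec = {
    "paths": {
        "/widgets": {
            "get": {"x-owner": "catalog-team"},
            "post": {},  # nobody has claimed ownership of this one
        },
        "/orders/{id}": {
            "delete": {},
        },
    }
}

print(unowned_endpoints(spec))  # ['POST /widgets', 'DELETE /orders/{id}']
```

In a real organization the spec would come from an API gateway or a repository scan rather than an inline dict, but the "fractal" problem Jerry mentions is exactly this loop run at scale: every path expands into methods, and each one needs an answer to "who owns this?"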
Loosely speaking, inventory management of API endpoints is going to become very critical. That was one thing I wanted to say about this. Obviously they shouldn't be exposed to the internet unless that's the intention, and if it is, then you need to make sure that you know what the heck you're doing. But on the other hand, this is, I think, a problem that isn't intuitive; especially the inventory problem isn't intuitive to a lot of people, and if you're not keeping your eye on it, it can get away from you really fast.
Yeah. I mean,
this oversimplifies it, but I think of this as attack surface management. Do you understand what your attack surface is, what's open, what's not, and how it's secured, if at all? Inventory management, all that jazz. And what's even more interesting, or at least more problematic, is that a lot of these tools may even be standing up these MCP endpoints without anyone being aware of it, or without explicitly being told to do it. So yeah, it gets interesting. This goes back to something we've talked about a lot: how the cloud and SaaS tools have democratized IT in many ways. We used to be the gatekeepers. Hey, you want to be in the data center, and you want to open something to the internet? Well, you've got to go through all these hoops, because we own the firewall, we own the network, we own the IPs, and we own the pipe. All that's gone. So this is just another version of that, where more and more things are just going to be exposed to the internet, much like your very favorite management consoles.
Absolutely, absolutely. Yeah, I think it just goes back to this: a well-run security program has got to understand everything it's exposing out there to the world, and that's difficult. That's easy to say; that is tough to do.
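As a trivial sketch of the "know what's listening" point: even without a commercial attack-surface tool, checking whether a host and port actually accept TCP connections is a few lines of standard-library Python. A real exposure-management program would enumerate DNS names, certificate transparency logs, and cloud inventories; the demo below just checks a listener it creates itself, so it is self-contained.

```python
import socket

def is_listening(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno on failure
        return s.connect_ex((host, port)) == 0

# Demo against a listener we control: bind an ephemeral local port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

print(is_listening("127.0.0.1", port))  # True: the port is open
srv.close()
print(is_listening("127.0.0.1", port))  # False: the listener is gone
```

The hard part, as the discussion notes, isn't the probe itself; it's knowing the full list of hosts and ports you should be probing in the first place.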
Right, so the dimension I wanted to add, or the nuance I wanted to add, to that discussion is that unlike a management console or, you know, a WordPress site or whatever, the challenge with API endpoints is that the ownership of the different endpoints is often obscured. It's easy to know that you have something listening, and you should know that; but it's more difficult to attribute each individual call inside your API to a team or a person or what have you, and I think that's going to become much more important as we go. So it's, I think, a bit of nuance that has to be added to
exposure management programs.
Absolutely. All right, our last story is pretty interesting, because, hey, if you're a criminal and you've been having trouble getting your malware onto people's computers, this is the story for you.
It's amazing. This one comes from The Register, and the title is "Crims create fake remote management vendor that actually sells a RAT." So at least they're delivering on their promise.
They are, they are. It's not fake marketing.
So the deal here, and it wasn't obvious to me at first, on first read, is that, I think it's Redline, it's believed to be Redline, is the name of the malware purveyor. They set up a company. I mean, they went so far as to create all of the infrastructure: a domain, a business, which allowed them to get an extended validation certificate, which they used to sign their malware. Which makes me have questions about the rigor of that process now.
I do, too.
You know, anyway,
they created a corporate website, and they registered a domain, and I guess a business name; secureconnect.com was the name. And they made it look very overtly like it was legitimate tooling, to the point where the researchers at Proofpoint who initially discovered this believed that it was actually a legitimate remote access tool being abused, kind of like we see with all the others. But what they later found was that it's actually not legitimate. It is actual malware that has been signed, and the actual customers of this "SecureConnect" company, in quotes, were basically consumers of, I should say resellers of, the malware, who were using it to deploy to victim systems. The whole process was intended to make it less obvious that malware was involved, because the extended validation certificate was blinding a lot of the normal anti-malware tools. And obviously, if it's a remote access tool, your CrowdStrike is probably going to say it has remote access capabilities; well, yeah, that's its job. So, the whole way through, this is kind of badness. This is the first time I'm aware of, and I'm not saying it is the first time, but it's the first time I'm aware of, something that is illicit being packaged up as a legitimate offering, not for the purposes of tricking people into coming and downloading it from a website, but as a marketplace for other criminals to come and buy a subscription so that they can in turn turn around and deploy it onto their end victims' systems. So I was
going to make a joke about Microsoft Office or Windows, but that's probably in bad taste.
Yeah, it's interesting, right? We're just getting more and more abstracted in how the bad guys and the good guys do
their IT.
Right, right. So basically, one of the campaigns that used this malware, I guess one of the customers of SecureConnect, which by the way costs 300 bucks a month, in case you were wondering, was sending out emails, basically spam emails, in both French and English, that purported to be invitations to submit a proposal, a bid on an upcoming project. It directed recipients to download, via a link, some material that had information needed to bid on the proposal. So probably they were targeting people who would normally expect to receive those kinds of requests for proposals. And what they got contained something that appeared to be Microsoft Teams; I think that particular binary dropped another binary, which was the TrustConnect agent. If you think about it, if you're an IT person and you see that process come up and you've never seen it before, obviously you're going to go look it up. Well, you go to their website, and all appearances are that it's a legitimate remote access tool. So it's minimally going to create some confusion around whether there's actually something going on or not. But again, it's a remote access tool; it allows the bad guys to have kind of full access on the systems, just like any remote access tool. Maybe there's an axiom here, something like: a sufficiently advanced piece of malware is indistinguishable from a productivity tool. I don't know, I might be onto something here. I'll work on it for the next show.
I think you might be onto something there,
kind of like your potential breach energy, which I think absolutely needs to become its own discipline. I see doctoral dissertations being written on this concept that you've coined. There's a whole genre of physics that is kind of parallel to cybersecurity, so yeah, I think there's something there.
Yeah, kinetic breach energy, maybe, and then we can get it all sorted out. Except in very small breaches it's quantum breach: discrete units that act differently based on regulatory gravitational fields. I don't know.
Because we hide those, we hide the small ones, right? The lawsuits are about the wave and the particle. There's something here, I mean, there is. Maybe it's all in humor, but I think there's something to explore there. So,
anyway, when I first read this, my initial thinking was, gosh, how are people falling for that? Are companies literally going and signing up for a tool that turned out to be malware? And the answer is no. This is just a very creative way of obscuring actual malware. So I think the takeaway for me is making sure that your incident response teams know that just because there's a website behind the process you see running doesn't necessarily mean that it's legitimate. And by the way, they also point out in here, interestingly, that they, they being Proofpoint and some other unnamed party, were able to take down the SecureConnect site, but they've already seen that it's been rebranded and put up on another site. So this is going to continue on, and I would expect, if in fact this has been successful, and it's unclear, by the way, how successful this methodology has been, but if it has been successful, I would imagine we're going to see a lot more of it. And like you said, that might portend that the whole business of extended validation signing certificates needs to be re-evaluated as well. I don't have a good handle on any of that at this point, though.
Well, get on that and get back to me.
I will, I will, in my free time. I need
it by our next weekly staff meeting.
Okay, so we are close to the end. Thank you for listening; we do appreciate you. Thank you again to our Patreon sponsors. If you would like to become a sponsor, you can do so on our Patreon site at patreon.com/defensivesec. We are also on YouTube. I dare say we have the coolest YouTube page ever, right? It's at Defensive Podcast.
It's pretty cool.
I mean, it's pretty cool. And don't be put off by the low number of followers; it just hasn't gotten there yet. We're like that cool band you found out about before anybody else.
That's right, that's exactly right. And if you would like to find links to all the stories we talked about, and you happen not to be watching the video, because we do have those on each of the stories, you can go to our website at defensivesecurity.org and look for episode 341, and you will find those links, where you will also, by the way, find links to our Patreon site and to our YouTube page. We try to make it very easy for you. Anyway, you can find me offline, well, I guess it's really not offline, it's online, but offline from this, on my Mastodon instance, infosec.exchange; just look for Jerry. And where can people find you?
I'm also on your fabulous Mastodon instance, where I've been woefully quiet lately, and on X, both at the same handle, lerg. You can find me there.
Awesome. We have reached the end for this week. Thank you for listening. I hope you all have a great week, and we will talk again next time. Take care, stay safe, stay warm; I know there's a blizzard coming up in the northeast.
Indeed. There's always something; there's always something scary for somebody somewhere.
Absolutely. Take care, have a great week, everybody. Bye-bye.

Defensive Security Podcast - Malware, Hacking, Cyber Security & Infosec