
This time it’s not a rerun!
Please consider supporting the DefSec podcast here.
Here are the links we discuss this week:
Welcome to the Defensive Security Podcast. This is episode 342. My name is Jerry Bell, and joining me today as always is Mr. Andrew Kalat. Good afternoon, Mr. Bell, how are you? I'm so good it hurts. I'm here at the beach, but the weather's been crappy, so I'm reporting from the closet. It's the safest room in the house, right? Just hiding in a closet until the weather improves. Or the only quiet room in the house. You've got a hurricane named after your wife running around your condo, is that the problem? No, she's a lovely woman, I should say. She's amazing. But good, I'm glad you're at your undisclosed southern command location. Absolutely. How are you doing? I am good. It's been a little crazy busy as usual, but here we are, hanging in there. Staying alive. That's right. Awesome. I am back to being gainfully employed, so that's been quite exciting, and I certainly like working much better. I'll just leave it there. Yeah, we get painfully addicted to that paycheck and health insurance. Especially the health insurance, holy cow. Yeah. We'll get into the politics around that, but I hear you. I hear you.
Crazy. All right. Anyhow, first off, I do want to reiterate our deep appreciation for our Patreon donors. Thank you very, very much. A quick reminder that if you donate to support our show, you will get episodes about a week, or sometimes two weeks, before everybody else, and we think that is just peachy keen and worth the price of admission. Thank you so much to those of you who donate, and we love you. If you want to be among the cool crowd, you can sign up at patreon.com/defensivesec. I echo everything you said, but not in a creepy "we love you" way; in a "you guys are awesome, you help make the show" way. Not in an "I'll be creeping around outside your house at two in the morning" way; that's completely different. Correct. Correct. And I don't look good in a dress, I'll also tell you that. Well, you know, it doesn't work for everybody. Which, by the way, is a great segue into this: the thoughts and opinions we express on the show are ours and not those of our employers. So there you go. And with that, let's jump into some stories. The first one
comes from BleepingComputer, and the title here is "Ransomware payment rate drops to record low as attacks surge." There are a couple of really interesting data points in this report. Number one is that the number of ransomware attacks went up by about 50% in 2025 over 2024, but the share of entities paying dropped by half, so on a proportional basis you can think of it as about a quarter of what it was. But before you start to worry about the ransomware operators and their kids, are they hungry, are they starving, is this a problem, the answer is no, because the amount that each victim who does pay pays has gone up substantially, so it's still close to a billion-dollar-a-year business, and that's pretty amazing. There are a lot of interesting, almost conflicting data points in this article. The caveat, as always, is that it's from one particular company; it's their view of things, their research. I don't know how accurate it is, so let's just assume it's accurate for the sake of this conversation. One thing I found really interesting is the median. As you were
mentioning, the median ransom payment rose significantly, up 368%. Here's what surprised me: last year the median ransomware payment was only $12,738, much, much lower than I would have expected, and in 2025 the median was, again, what I consider very low, $59,556 US. We always hear about these multi-million-dollar ransom demands and negotiated payments, but this tells me that the vast majority of these negotiations end in the low five figures. I suspect that's because the lower ones don't make the news; nobody's going to write an article about somebody who paid a five-thousand-dollar ransom. That's not news. I mean, I would, but nobody would read it. But it's interesting. I honestly did not realize that the median payment, not the average, was that low. The other thing that I thought was interesting is they saw a
huge proliferation in the number of ransomware threat actors. Their theory is that law enforcement has been much more effective against the centralized big players, so as the big operations get killed off, the rats scurry, and smaller operations dramatically increase in number as a result. It's a somewhat counterintuitive consequence of law enforcement going after the big ones, and I'm not sure which is better: a dozen big groups that we know how to counter, or hundreds of small groups? Neither is good, but which is the least bad? I thought that was fascinating. I infer,
I mean, it's not explicitly said, but I infer that part of the reason for the jump in the number of attacks, despite the declining share of victims paying, is that the average payout is going up. To the ransomware actors it's almost like playing the lottery: the more times they play, the better their chance of holding a winning ticket. So I suspect that if that trend continues, if the average or median amount per payment keeps going up, we'll continue to see the raw number of attacks go up, because there's an incentive. And at the same time, the amount of effort required, especially with AI-assisted attacks, is shrinking; it's becoming much more scalable to run large numbers of attacks. So I'm not surprised, and I think it portends bad stuff for those of us who are responsible for protecting companies. Well, just like corporate America, the ransomware groups are trying to drive efficiency: less staff, more profits. Exactly, they're figuring it out. Maybe they should unionize. I think that might be the only option. Ransomware Operators Local 70. Right, right. I mean, the
other thing, a quote from the article that I found interesting: "Initial access brokers, hackers who sell access to compromised endpoints to ransomware operators, reportedly made $14 million in 2025, roughly the same as last year. This is only 1.7% of the total ransomware revenue last year, with initial access as a key enabler." I thought that was interesting for two reasons. One, how do we know that? It's not like they're publishing quarterly results about their expenses and costs, but it's interesting that we think we know it. Two, the average price for network access declined from approximately $1,427 in Q1 of 2023 to $439 in Q1 of 2026, thanks to automation, AI-assisted tooling, and oversupply from infostealer logs reshaping the industry. There's so much supply of initial access out there that it's forced down the price. It's gone down by, what, roughly 70%? I thought the way they characterized the oversupply was fascinating, because basically they're just swimming in creds. I guess what this tells me is that if we want to make this unprofitable, we just have to release all our info. Well, it's true, then you'd put the IABs out of business. And you know what? The people who are posting their creds by checking them into GitHub, they're ahead of the game. They're thought leaders. And soon AI will enable us to more quickly and effectively leak our data. I was
creating some shorts from our last show just before this episode. That episode hasn't dropped publicly yet, so once it drops I'll put those out there, though by the time you hear this it won't matter. We had some bangers about AI on our last show that I'm looking forward to getting out into our shorts, and hopefully the algorithm will share them with the people, because there's some great commentary there. Yeah, it'd be good if they get more than a hundred views; that'd be pretty awesome. We used to get like a thousand. It's so weird; I don't understand YouTube. From what I can tell, a lot of people are very frustrated with YouTube's recent changes, so I don't think it's just us. What if we just got deepfakes to replace us with, like, fitness models? That's idea number one. Idea number two: from what I infer, rage bait is everything now, so you'd have to say something like, "The way to avoid ransomware attacks is by putting crystals by your computer." And then you get people to argue with you: "No, what color crystal?" And you say, "Well, the purple ones, of course," and they say, "No, that will absolutely not work, you have to have the green ones," and then everybody starts arguing about what color crystal is best to protect against ransomware, and it just spirals, and suddenly you're at a billion views and we can retire. But then we're also branded as crystal racists. Well, maybe. We'll try next week and see how it goes. Anyway, let's move on to the next story.
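As a quick sanity check on the figures from that first story, the jump from a $12,738 median payment to $59,556, and the drop in access prices from roughly $1,427 to $439 (the corrected figure, not $14,000), a short Python sketch; the numbers are the ones quoted above, and the code is just arithmetic:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new (positive = increase)."""
    return (new - old) / old * 100

# Median ransom payment, per the report discussed above
median_jump = pct_change(12_738, 59_556)
print(f"median payment change: {median_jump:+.0f}%")  # +368%, matching the article

# Average initial-access price, Q1 2023 vs. Q1 2026
access_drop = pct_change(1_427, 439)
print(f"access price change: {access_drop:+.0f}%")    # -69%, a bit less than the "75%" guess
```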
Moving on, our next story comes from Cybersecurity Dive, and the title here is "Ransomware is now less about malware and more about impersonation." They point out that identity has replaced malware as the biggest threat vector, and that hackers are increasingly using legitimate credentials, which has the net effect of making it more difficult for security teams in organizations to defend against them. I don't think that should be an incredible surprise if you look at recent history, like the Salesforce attacks and whatnot. They point out that, I think it was, 50% of these attacks are happening against what they call manufacturing and critical infrastructure, and the reason, apparently, is that these organizations depend on continuous operations, so they have an incentive to pay and get past it. What I think is interesting is that
it changes the dynamic on how we have to approach things. Over the past 20 or 30 years we've focused a lot on perimeter security and technical security, and we've relied a lot on, quote, the integrity of our employees. I think the way we have to start thinking about these attacks is that they're really insider threats, and that's something we've not really reckoned with. Broadly speaking, unless you're in certain specific regulated industries, you're probably not really focused on user behavior. What this is really saying to me is this: obviously it's important to guard against credential theft in the first place to the extent you can, and to mitigate it if it does happen, by using things like passkeys and whatnot. But assuming that's not possible, or you've done it to the extent you can, these attacks are less about "I need more CrowdStrike" and more about "I need the ability to detect anomalous behavior from what appears to be a legitimate user." It's not that your people have suddenly turned bad; it's that you can't distinguish the activities of an actual employee from those of somebody who has stolen the access of one of your actual employees and is doing something bad. Right. Yep. It's easier than going
after exploits. As much as we've preached about this and said it a thousand different ways, passwords are super weak, especially as a single factor; they're easy to capture, and a lot of employees are known to reuse them. So it may not even be your environment that gets impacted or is the source of the data leak: they could be reusing the same password someplace else, that place has a data leak, the password gets out into the world, bad guys find it, and they come use it to try to get into your environment. But there are other things too. If they get some sort of foothold on your endpoint, they can use that trust relationship in the background, so even if you have fully phishing-resistant FIDO2 enabled, or YubiKeys or whatever, once they're on your endpoint it's easy to ride the connection you've established into everything. So you're right, there's some interesting stuff around user behavior, but this gets tricky. These things sound great on paper, but every time I've implemented them they can be very noisy; they trigger on a lot of things that are not high fidelity. Something that looks like awful behavior could just be somebody doing something a little bit different, so you've got to have an ops team ready to sort through those findings, or wait for the AI-enabled super-duper version to fix all your problems for you. But it is interesting: we spend a lot of effort patching, we spend a lot of effort hardening, but we don't spend a lot of effort on "How do I know that employee really is that employee every time they connect?"
Right, exactly. There are technologies out there, for example Conditional Access in Microsoft Entra, which is pretty good; it opens up some opportunities, but it's not particularly widely deployed, and obviously it's not the only solution, just the one that comes to the top of mind. In my experience, user behavior analytics has even less fidelity than data loss prevention. So, while I'm going to speak out of both sides of my mouth here: while it is important, given the trajectory we're on, to think about monitoring the behavior of insiders, or insider credentials, it's also extraordinarily hard and time consuming. Maybe this is something that can be improved through the use of AI; I'll withhold judgment on that for now. Maybe at some point it's inevitable, but for now, I don't know a lot of companies that can apply the amount of resources required to make effective use of it. So I think it's going to depend on figuring out where to put your thermometer. You don't have to worry about monitoring behavior in every context; there are most likely certain key processes where you would want to know if somebody was not actually who they said they were and was trying to conduct some fraud or something like that.
The other thing I thought was very interesting: they point out in this article, from a report out of Cloudflare, that while there's obviously a huge amount of fraud happening, they're seeing a large number of attacks where the targeted amount averages about $49,000. The reason, apparently, is that a lot of organizations, especially bigger organizations in those categories we talked about, manufacturing and other critical services, have a $50,000 threshold, which I guess is very common, above which additional approvals are required. So the attackers have optimized by coming in just under the common approval threshold, which makes it more likely they're able to get the money without raising any alarms before it goes out. That's a pure version of risk versus friction right there. Yeah: under $50K, we'll take a little more risk to reduce friction. That's interesting. Well, maybe that means you should have extra alarms for any amount between $45,000 and $49,999. Yeah. I mean, the money has to go somewhere, and, it's a rough thought, but it seems like a lot of these kinds of attacks are going to be perpetrated by changing the destination bank account, or setting up a new vendor. So to me it feels like the opportunity might be less on the transaction approval and more on the vector by which the money gets transferred, because my intuition is that there are a lot more approvals of payments than there are times you're changing bank account information for suppliers or adding new suppliers. It happens, but not nearly as much, and so that seems like an opportunity to catch it as well. Fair enough, but I think this is also one where, hey, is anybody talking to finance about this? Once they start losing money, you might want to talk to your finance team a little bit. Yeah. As I
often say, a lot of organizations only care about disaster recovery after they've had a disaster, and I think this is quite likely one of those things. Look, if you experience one of these and you lose forty-nine thousand dollars, depending on the size of your company it may just be, "Well, that sucks, we're going to have to take the hit," because it doesn't make sense to spend a million dollars a year to guard against a fifty-thousand-dollar-a-year potential loss. So again it comes down to the trade-offs we have to make. That is absolutely true, and it makes it tough. But if you're in a position where this can just keep happening over and over and over again, maybe that million dollars does make sense, because it might also mitigate other risks too. Anyway, it's a crappy situation getting crappier, but that's the job. It's a theme of our show.
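The just-under-the-threshold pattern described above is straightforward to screen for. A minimal sketch; the $50,000 threshold and the 10% band below it are illustrative assumptions, not figures from the article:

```python
def flags_threshold_dodging(amount: float,
                            approval_threshold: float = 50_000,
                            band: float = 0.10) -> bool:
    """Flag payment requests landing just under the extra-approval threshold.

    A request for $49,000 against a $50,000 threshold is exactly the pattern
    discussed above; amounts at or above the threshold already receive extra
    approvals, so they are not flagged here.
    """
    return approval_threshold * (1 - band) <= amount < approval_threshold

# Hypothetical amounts
print(flags_threshold_dodging(49_000))  # True: just under the line
print(flags_threshold_dodging(30_000))  # False: well below the band
print(flags_threshold_dodging(52_000))  # False: already gets extra approval
```

In practice a rule like this belongs alongside, not instead of, controls on the transfer vector itself (vendor setup, bank-account changes), as the hosts suggest.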
It's the job. All right, let's see, moving on to our next story. We didn't talk about the initial instance of this because the events happened fairly close together, but the story here, from SecurityWeek, is that exposure management company watchTowr reports that a recent Cisco Catalyst SD-WAN vulnerability, initially exploited as a zero-day, is now being used more frequently by threat actors. The initial story was that a zero-day attack had been exploited for some period of time against Cisco's SD-WAN technology, and what I think was most alarming is that it apparently went on for some time in the normal deployment mode. This technology is intended to abstract away the complexities of linking diverse locations, creating kind of a virtual access mesh for your organization. Well, this vulnerability conceptually allows attackers to create their own remote location: it gets added to your network and has access to your resources in other locations. By itself it's most likely not giving them access into your payroll system, but it does let them attack systems on your network in a way that probably doesn't raise the same level of alerts as traffic coming from external locations. So this is bad. When I saw this: super bad.
Yeah, SD-WAN is one of those risk trade-offs, right? It's made point-to-point and mesh VPNs radically simpler, with radically less overhead to manage, but at the same time, this was kind of the worst-case scenario that could come out of that, and it apparently happened, as a zero-day. This is not one of those cases where, if you'd patched, you'd be all right. It's very similar to an edge VPN device with a problem: it's meant to be on the edge, it's meant to be open to the internet, and it's not like you can easily hide it. It is a very widely used and deployed piece of technology, for sure. Yeah, the only thing I can say is it really highlights the importance of having your internal network instrumented, so that you have some amount of visibility when something malicious originates from inside your network. I think that's useful way beyond this particular use case, because there are so many different opportunities for a threat actor to land on something inside your environment, and then, for them to do anything useful, they have to move laterally, and typically, in order to do that, they have to probe and gain credentials and whatnot. So there's a lot of opportunity with things like honeypots and honey credentials and honey files. That's the only mitigation I could come up with: if this is a kind of technology you embrace, your main defense against this is going to be detection.
Yeah, it goes back to our last story too, which is impersonation: the defenses against impersonation are similar to this if you assume breach. This is "I can't stop the initial breach, okay, so what can I do to detect it internally?" And, like we were talking about earlier, what are my crown jewels, the most sensitive, most critical things in my environment that I care about, and how can I more accurately instrument detection, and do detection engineering, around those areas with some level of high fidelity? The problem with all of this always becomes the continuum of false positives versus false negatives, and where you want to be: with false positives you're putting a lot of load on your ops and other teams who have to investigate; with false negatives you're missing stuff. So where do you want to sit in between? Both have problems, and if there were a perfect solve we'd all have implemented it and these problems wouldn't exist. It very much has to do with understanding your environment, understanding what's normal, and ideally finding some step in that attack chain that is abnormal enough that you can detect it. Easier said than done. Yeah. My recommendation, and I've done this in the past too,
is to look for opportunities to add high-fidelity signals into your environment. For example, create an account that should never be accessed and alert on it: if you ever see a login event for that account, you know something has gone horribly wrong. Or put a file with fake credit card numbers on a file server, with ACLs that don't allow general users to access it, and turn on audit logging: if you see that file being accessed, something has gone horribly wrong. There are lots of different permutations you can do on that, none of which are complicated. It's not 100%, because you don't know whether the attacker will take the bait, but there are things you can sprinkle around. One of the things I liked to do was this: for systems accessible from outside, you put in place controls to limit failed logon attempts, so that after a certain number of failed logons the source IP gets blocked, either forever or for some period of time. On the internet that's not a big deal; it happens all day, every day, fine. But on your internal network, if you see accounts getting locked out, it's not to say it can't be a misconfigured batch job where the password got changed, that can happen, but it is a higher-fidelity signal. If you're seeing accounts getting locked out, or internal IP addresses being blocked because of too many failed logon attempts, I think that's a potential indication of attempted lateral movement. Makes sense. Flip side is, you've got to tune those just right so that you don't burn out your ops team with false positives and they start ignoring them.
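The canary-account idea above can be sketched in a few lines. The event format and the account names here are made up for illustration; a real deployment would read Windows Security events or syslog, and would feed a SIEM rather than print:

```python
# Any auth event touching an account that should never be used is treated
# as a high-fidelity alert, exactly the "sprinkle signals around" idea above.
CANARY_ACCOUNTS = {"svc-canary-backup", "finance-archive-ro"}  # hypothetical names

def canary_alerts(events):
    """Return alert strings for auth events that touch canary accounts.

    `events` is an iterable of dicts like
    {"user": "...", "src_ip": "...", "action": "logon" | "logon_failed"}.
    """
    alerts = []
    for ev in events:
        if ev["user"] in CANARY_ACCOUNTS:
            alerts.append(
                f"ALERT: {ev['action']} as canary account "
                f"{ev['user']} from {ev['src_ip']}"
            )
    return alerts

# Hypothetical log sample: one normal logon, one touch of a canary account
log = [
    {"user": "jbell", "src_ip": "10.0.1.5", "action": "logon"},
    {"user": "svc-canary-backup", "src_ip": "10.0.9.77", "action": "logon_failed"},
]
print(canary_alerts(log))  # exactly one alert, for the canary touch
```

Because legitimate traffic should never reference these accounts, the rule needs essentially no tuning, which is what makes it high fidelity compared with behavioral analytics.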
Yeah, 100%. All right, moving on to our next story. This one is from Dark Reading, and the title here is "Nation-state actor embraces AI malware assembly line." All roads lead to AI lately. This is talking about a threat actor who, and I think they have this amazing quote, is "embracing mediocrity," which was pretty good. When I read this I immediately equated it to something that's happening in the kinetic world: using a bunch of low-cost drones to tie up your super-expensive interceptors. What's happening here is this threat actor is using vibe-coded malware that in many cases is really badly written. In one instance they talk about a tool designed to steal browser credentials that, instead of actually being able to connect to a command-and-control server, just had a placeholder; it was incomplete. What this particular threat actor is apparently trying to do is bombard the victim with a lot of different kinds of malware in rapid succession, so that you get tied up trying to triage and analyze. Most companies don't have infinite resources to assess what their EDR or antivirus is detecting, so that process gets gummed up, and meanwhile
it just keeps on coming, and something is going to get through. The other thing they talk about with this AI strategy in particular is that they're using AI to write the malware in uncommon languages, like Nim and Zig and Crystal and some others, where historically a lot of the endpoint detection tools have had trouble figuring out malicious intent. That hasn't been a big problem to date, because those are fairly specific, novel skills to have, and typically not the kind of skills malware authors specialize in. But now it doesn't matter, because they can just search for obscure programming languages and ask their LLM to produce malware in that particular language, and off they go. And they don't even care if it's good. Right, because they're using it as a distraction, something to wear down your defenses, your attention, in some ways. But it does open up the opportunity of offering QA for vibe-coded malware as a service, or an AI agent that performs QA for your vibe-coded malware. It's also a little interesting how telling it is that we have tuned our defenses, whether EDR or antivirus, to the 90th percentile, or the 85th percentile, to the bell curve, and this sort of novel stuff that's very different apparently gets past a lot of it. That's a bit of an indictment of the EDR industry, or of behavior-based detection. Of course, I understand the challenge a little: they've got to be fast, they've got to be low impact, they can't put a bunch of load or latency on the system, so they've got to optimize for the likely cases. But it does point out that edge cases are powerful. Now, in this case the technique is being used poorly, but it's not going to continue to be used poorly.
No, I mean, at some point most of the malware being created this way, even with low-effort attempts to produce it, is going to be relatively high quality. That's just the progression we're on. The reason I wanted to talk about it here is that it's a bit of a novel technique: if you're responsible for the security of an organization and you start to see a whole bunch of crappy samples come your way, it may be part of a broader, focused campaign. It may not be a bunch of disconnected crappy attempts; it might actually be somebody who is genuinely trying to obscure what they're doing. It reminds me of back in the early 2010s, when we saw a lot of DDoS attacks being paired with other kinds of attempted infiltrations, where they were trying to tie up the security team with responding to the DDoS attack while they were off doing something else. Not entirely different. Anyway, just beware that these kinds of spray-and-pray attacks are out there, and I think we should expect to see more threat actors behaving like this. Jerry, just rub some AI on it, you'll be fine. Well, obviously that is the answer, right? We have to have AI firewalls; that's where this is all headed. AI all the things. That's really the only option.
100 percent. All right, moving on to our last story. Now, a caveat on this, because I think there's a fair amount of marketing in it, but I have had discussions with some fairly prominent bug hunters who have told me that this is real; we're seeing a sea change. Anyway, this comes from The Hacker News, and the title is "Anthropic finds 22 Firefox vulnerabilities using Claude Opus 4.6." As the title suggests, Mozilla and Anthropic jointly published an announcement about, I think, ninety-plus vulnerabilities or bugs, inclusive of vulnerabilities identified by AI, that Mozilla has been fixing in Firefox. Fourteen of the 22 were classified as high severity. What I thought was interesting is to contrast this with what we talked about a couple of weeks ago: the experience of the maintainer of curl, the open-source package, who stopped accepting any vulnerability reports that were AI-generated. And, we didn't cover it on the show, but there was a big blow-up last week where the maintainer of, I believe it was matplotlib, anyway, a library, received an AI-generated vulnerability report and rejected it because it originated from AI, and the AI allegedly wrote up a hit piece about how discriminatory this person was, blah blah blah. What I think is interesting is that this is happening,
right especially when I talk to to professionals in the industry who are actually doing this they're
saying this especially cloud is doing a very good job now if you read through here there's
kind of a dichotomy between its ability to find vulnerabilities and stability to write exploits
with a lot of abilities it's pretty good getting good at the former it's not so great at the latter so
that that aside I think the kind of out of hand rejection of AI generated vulnerability reports
is not going to have long legs and the reason for that is because
people are finding vulnerabilities in software with AI whether the maintainers choose the fix them or not
is you know it is kind of going to become the question it doesn't change the fact that the vulnerability
exists and has been found and so I think we're going to start having a problem where if we have
software maintainers not patching AI originated vulnerabilities we're going to have a big problem
Well, it could very well turn into, hey, which AI model did you use, and that sort of
credibility of that model might be how they filter. That could be one way, but I agree with you.
The other thing that comes to mind is, great, it's good that we know about
the vulnerabilities, that's excellent. One of the things that I see often as a problem is engineering
teams, whether it's open source maintainers or corporate environments, already don't have enough
time to fix the vulnerabilities we've already found. There's another aspect of that: more vulnerabilities
without context is tough. But it's better to know about the vulnerabilities; the vulnerabilities
exist whether we know about them or not, so it's good that these tools are finding them. So I'm thinking
about how do I bring this earlier into the development lifecycle, as that code's being developed,
to find it and stop it from shipping, and I think that's where this ultimately is heading, right? I
need to be able to enable the developers, as they're developing code, not to write insecure code,
and that's where I think the real value ends up being. But ultimately, yeah, if AI
is proven to be effective at finding vulnerabilities, the bad guys are sure as hell going to use it,
so if we're not using it as good guys, we're blinding ourselves to very real risk. Yeah, 100 percent.
I think with most commercial companies, what you said makes sense, and we'll probably see
that sort of change naturally happen. What I observe is in the open source space, which, by the way,
everybody and their dog relies very heavily on lots of open source, there's
this rejection of all things AI in a lot of contexts in open source, and so I think
it's gonna be a while before we see a general acceptance of that. I mean,
I understand the perspective, I understand the reasons, I don't necessarily disagree with a lot
of the reasons and a lot of the objections, I get it, but the reality is the tools are out there,
people are using them, they're gonna be finding vulnerabilities, and unless we find a way, as you
pointed out, to integrate that into the development process and find and fix them
earlier in the process, the vulnerabilities are there, people are gonna
become increasingly aware of them, and if they make it out into the world, so to speak, without
having been fixed, it is a risk. Now, one of the things I was thinking of is,
on some timeline, does the fact that we have AI vulnerability identification
becoming increasingly effective ultimately start to drop the number of vulnerabilities
that remain? Because they get fixed earlier in the process, or the
developers of software just throw up their hands and say, like, this is ridiculous, I can't
handle it, or, you know, over time people just get better at writing code that doesn't
have vulnerabilities that the AI finds. Is that where we're going? I don't know.
I don't think it's the last one, especially because we're abstracting more and more of actual
code development away from people. I mean, unless those vibe coding agents, which I know some people
are now trying to stop using that term, it's a whole marketing fight, unless we teach those vibe
coding agents secure coding practices. But even then, I think it really depends on
the environment the developers are in, I think it really depends on what they are incentivized to do.
I think if you look at a lot of environments, they've probably got static code analysis tools,
legacy static code analysis tools, that probably have more findings than they have time to fix anyway.
So again, it comes back to prioritization: is this more noise or is this more signal?
And as an AppSec professional, I want to find as many vulnerabilities as I can,
but that's only 20 percent of the problem; how do I get them fixed, and how do I prioritize
which ones get fixed, is another big aspect. What I can see this absolutely enabling is, like, if
your source code gets stolen, you've got a lot more to worry about now. You know, in the
past you could say, well, maybe not a big deal, because it's hard for people to take my source code
and find vulnerabilities. Not anymore, that just got easy. Or, you know, let's say I'm a third party auditor
of some variety, or a pentest company, and, you know, a source-code-enabled
pentest, this enables that to be far more effective. There are all these second-order effects I can see from
this coming, and again, they're just going to get better from here. But I think the ultimate endgame
for me and the environments I work in is, cool, how do I take this and shift this way left in my
development cycle and get it looking at this very early, before things even get pushed to prod
on a Friday afternoon, as you so like to say. I think we'll get there,
you know, and ideally, hopefully this becomes a feedback loop of, hey, you've got AI capabilities
that can look through code for insecure code, maybe that would inform
your AI when it's vibe coding for you, and this becomes a cycle. Of course there's a cost,
right, and we've got to get people to pay that cost, but I can see that improving. If there's an
automatic mechanism of, hey, I'm now going to scan your code as I've created it for common vulnerabilities,
and automatically correct them, that would be great, and then see where we end up. So
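As an aside on the "scan the code as it's created" mechanism discussed here, a minimal sketch of that kind of check might look like the following. The pattern table and function names are purely illustrative assumptions for this episode's show notes; a real pipeline would put a proper analyzer (Semgrep, CodeQL, or an LLM reviewer) where the toy `RISKY_PATTERNS` table sits:

```python
# Toy sketch of a pre-commit-style scan: flag obviously risky lines
# before code ships. Everything here is illustrative, not a real tool.
import re

RISKY_PATTERNS = {
    "use of eval on dynamic input": re.compile(r"\beval\s*\("),
    "possible hardcoded credential": re.compile(r"(?i)(password|secret)\s*=\s*['\"]"),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for each risky line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(user_input)\n'
for lineno, label in scan_source(sample):
    print(f"line {lineno}: {label}")
# → line 2: possible hardcoded credential
# → line 3: use of eval on dynamic input
```

Wired into a git pre-commit hook, a check like this would block the commit when `scan_source` returns findings, which is the "find it before it ships" loop the hosts describe.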
I think it's a net benefit, but I think it's only a small piece of the puzzle. Yeah,
I think it's going to take a while, too, for some of the benefits to flow through. So we don't have
the story to talk about, but there was a story floating around about how
somebody was looking at AI-generated commits into GitHub, and what they found was a lot of
the same error, you know, a lot of the same coding vulnerability, being introduced over and over
and over again, because, I'm guessing, it found
some sample code somebody wrote on Stack Exchange a long time ago that had some vulnerability in it,
and so it's being kind of propagated out. But I think that's a short-term problem, I mean,
not to downplay the significance of the problem it's creating today, I don't
mean to downplay that at all, it's going to be a big problem. I think, broadly speaking,
we have an absolutely horrific mess ahead of us in a couple of years from all the code that's
being written by models that don't know how to code securely, and people using those
models who don't know what they don't know. And so
we're creating this mountain of security debt that we're going to have to collectively pay down at
some point, and those models will always be the cheapest, right? There will always be a premium price
for the secure coding capabilities, just like there's a premium price for, oh, you want SSO enabled
for your enterprise SaaS solution, that's extra money; oh, you want good logs, that's extra money; oh, you
want secure code, that's extra money, right? Those GPUs ain't free. Not yet. Goodness. So, there we go, that
is the show for this week. I do want to say, number one, I appreciate you, but number two,
I think while it looks awful in some respects right now, it won't always be like that. You know, if you
look back, let me just narrowly think about AI-generated images two years ago, right? People had
six fingers and three arms, and think about where that is today. And I know,
I'm not interested in debating the ethics of pictures created by AI or anything like that, my
point simply is that the state of the art and its ability to do a quality job is improving
radically. We're gonna have to deal with the absolute mountain of horse crap that it has been
producing, but it's inevitable, I mean, it is what it is. I hate it, we can
all hate it, right, but it isn't going away. That's the one thing that annoys the crap
out of me, is I see so many people who are absolutely rejecting this and not seeing the bigger picture.
This is not blockchain, like, this isn't gonna go away, it is here,
and you are on the bus or you're not. I mean, there isn't a point at
which the businesses are going to say, like, oh, that was a dud, that was a bad idea.
Well, it will happen, it will be after a massive war launched by the Cylons and a couple years of
running from them, and then we decide to reject all technology, so you've got a long way to go
from here to there, and that's when we send the whole fleet into the sun. Although the Cylons did
go off on their own, so who knows, they're still out there. But no, you're right, and we talk about
this: there are probably going to be net positives along with the negatives, it's not either-or, it's
not black and white. And this is something that I think our industry specifically, and those who
are loudest, the way the algorithms work, the activists, the ones who care about a very narrow
slice of this, and they're right to care, but they have just this very narrow view, right? They just
care about this one thing, and they look for everything that reinforces their position on it, and they
don't care about the possible benefits or the trade-offs, they want one view of reality,
both positive and negative, right? Those on the positive side look at what this is going to do
for us and don't care about all the negatives, and the folks on the negative side look at the negatives
and don't care about all the positives. Like anything, there's going to be a balance, and there are
going to be positives and negatives we do see a lot of positives come from AI we do see a lot of
positives in health care and medical diagnostics and other areas that are starting to to make real
differences and save people's lives and you know we'll probably see things and breakthroughs and
medical technologies and drug development other things like that and and things like this last
story could make our lives easier in finding and fixing vulnerabilities so bad guys can't
I do think there's value in having a pragmatic view, and like any technology, we're in the early days,
but we are in this weird echo chamber where everybody who gets any air time is at one end of the
spectrum or the other, and that's all we hear. It's the typical barbell situation,
so it makes it challenging, I think, for the average person to understand where we're at
and to weigh this, and they hear either it's going to kill us all, or we're all going to be able to
sit on the couch watching Oprah, eating bonbons, getting a ton of money for free. Like, those
are the two extremes people are thinking about, and they miss this weird messy middle ground,
it seems to me, at least the majority of people, or at least the people who are getting air time. And I
think there's a lot under the weird messy middle ground of pros and cons, and it's not
like a static thing like it's constantly going back and forth and changing and adapting and
oh, that's not great, let's, you know, everybody who's involved with this gets a vote, right, in terms of
how these tools evolve and change; I'm not saying they get equal votes, but if
something bad happens we can adjust, something great happens we can reinforce it. Like,
I just think people are very, very hyperbolic about this, and I think it's a
major issue endemic in the human condition. But, you know,
whatever you want to say about AI, we all have to collectively get up for work.
Tomorrow's Monday morning, like, we all have to get up for work, and we have to go to work
on Monday morning, and we have to figure out how to do the thing for our employer,
and, you know, we have bills to pay, we have food to buy, you know, it's, um,
we have to live in the world that we're in.
I do think, by the way, that aside from, like I mentioned, this
coming catastrophe that we're going to have to deal with, I also think it is true
that we're going to see a reckoning in AI soon. But, in my estimation, it's going
to be more like the reckoning that we saw in 2001-2002, right, where we are
collectively shoveling money into a furnace right now, you know, and that is not sustainable,
and so there will be a reckoning, and it is going to be ugly, it's going to be
catastrophic. There are going to be a lot of the players in the AI space who won't be
around in five years or ten years, like, they're going to be gone. But that
is a different statement than, like, AI is going to be gone. Because if you sat
in 1999, you know, we were shoveling money into the furnace of the internet,
literally shoveling money into the furnace, not at the same speed we are today, but that's
what was happening, and there was a giant crash, a giant correction. But guess what,
the internet is still here, and in many ways is far more sophisticated, in ways that we probably
couldn't conceive of back in 1999. Yeah, we are old enough to have been through this before, and
this is the creative destruction of capitalism. There is this brand new opportunity, a game changer.
When we went through it before, it was the internet and the rise of the commercial internet;
we've probably been through it other times, but I'll just stick on that one for a minute. Now it's
the rise of AI. There's going to be a whole bunch of companies trying a whole bunch of things;
in hindsight it will be obvious, yeah, that makes sense, this doesn't, but you don't know until you try.
And there's a whole bunch of people with a whole bunch of money who have bought into the premise
that some of these guys are going to figure out how to make this work and make us a ton of money,
and that's how these VCs and angel investors and private equity work: they're chasing the odds
that somebody's going to get it right, but that means a lot of people are going to get it wrong,
and they're going to go out of business. That's how tech innovation often progresses in these sorts of
inflection moments. It looks scary, it looks devastating, it looks crazy, maybe the way it's
supposed to be, if there is such a thing, but there's also a real-world impact on people,
which sucks, and I get that. But I mean, we saw this model play out in the dot-com era, that profits didn't
matter, it was a new math, profits were immaterial, it was only eyeballs, eyeballs, eyeballs, market share,
mind share. Guess what, the fundamentals still apply, and for every breathless, incredibly confident
and convincing analyst who said profits no longer matter in this world, as confident as they
were, they were also completely wrong. So be careful who you trust, be careful who you listen to;
they can be as convincing and as authoritative as somebody narrating a nature documentary,
I was trying to come up with a cool name for that, while still being incredibly wrong.
We won't know until we're on the other side of this. By the way, that doesn't mean that I know what's
right, I just am highly pragmatic about those who think they know; I don't know if they know or not,
some of them are right, a lot of them are wrong. Yeah. I know that right now there's a lot
of consternation about the job market, in security specifically, for good reasons, I
mean, for the first time in my lifetime, and I think probably for the first
time in the history of IT and security, we're seeing kind of a downturn, right, at least in terms
of employment. You know, for my entire career security budgets only went up year after
year after year, and I'm not saying that collectively they're necessarily going down,
but I think the mix of headcount to, you know, opex slash capex is rebalancing, and
there's just fewer people, there's a desire to find other ways to scale, and so, you know, we're
seeing a lot of, as you described it, creative destruction. Like, historically, consulting
companies and MSSPs were huge hiring environments for people starting in their
careers, like, that was where a lot of people got their start, but now a lot of that is, you know,
driving toward automation, right, because they're trying to find ways to scale without
hiring people, and it is what it is, like, this is what's happening. And so it's creating
a lot of pain for people who are coming into the marketplace at a time... like, for the past 15
years we've been consternating over how we need more people, we need more trained people,
and so we've created this engine to produce trained people with degrees and certificates and boot camps,
you know, all this stuff, and they're coming out en masse, and there's fewer and fewer jobs for
this increasing number of people. But one thing I will say is, I think companies right now are looking,
whether they will say it or not, they're looking for people who can help them through the forest,
navigate AI, from a company perspective or from a security perspective. And so
the advice I would give people right now is, that is the thing, if you're coming
into this new, that's what I would be focusing on: how do you help an organization improve its
security function using AI, or embrace AI in a less risky manner, that sort of thing. Not that
there's a lot of great options, and there's certainly lots of companies who will sell you,
as we like to say, blinky boxes, but, you know, they're looking for leadership,
and looking for people who are comfortable with the technology, who can help them through the forest.
That's where I think you can find some success right now. I think that's well said, I agree.
So anyway, hopefully that helps somebody. Anyway, I digress.
So if you want to find links to the stories we talked about, you can do so by going to our
website, defensivesecurity.org, and looking for episode 342. You can find us on the most amazing YouTube
channel that I think perhaps has ever been. It's really early days, so if you follow it right
now you're getting in on the ground floor; it's kind of like if you saw
Taylor Swift playing at your local bar. Exactly. Just look for Defensive Security
Podcast on YouTube. Once again, thank you to our Patreon sponsors; if you do want to sponsor us,
it's patreon.com slash defensivesec. And actually, I'll go back one, and we can talk
about where we're at. So where can people connect with you? Not that they ever have. Well, I guess
let's lead with me: where can people find you if they wanted to? Wow, I don't know what that
means, but you can find me on X and on infosec.exchange, which is your phenomenal Mastodon
instance, with the same handle, at lerg, l-e-r-g. All right, you can find me on Mastodon, it's at jerry at
infosec.exchange. And with that, as I said before, have a lovely time, a
lovely week ahead. Hopefully we'll be back again next week; I will be traveling home again, so I
don't know exactly what my schedule will be, but we'll try, we'll figure it out. Have a great week,
everybody, take care.

Defensive Security Podcast - Malware, Hacking, Cyber Security & Infosec
