
Take a Network Break! I'm Drew Conry-Murray. I'm John Burke.
Ethan is away this week, so grab a virtual donut and join us as we spin through the
tech news countryside. We're going to dig into research on a new wireless vulnerability.
We'll talk about sovereign SASE from Versa. There's a new router from HPE,
and Anthropic is apparently going to war with the Defense Department. There are financial
results from Dell and Nvidia, and more. We are sponsored today by Forward Networks.
Enterprises rely on Forward Networks' network digital twin for unified, trusted automation
across distributed hybrid networks. Forward Networks' network digital twin ensures
connectivity isn't disrupted, security posture is unchanged, and compliance is maintained at all
stages of change management. Forward Networks' digital twin is the difference between risky automation
and operational certainty. Learn more about how to protect your network and your sanity
with Forward Networks at forwardnetworks.com/packetpushers.
And also, stick around after the news: we have a sponsored Tech Bytes conversation with Palo
Alto Networks on OT and IT convergence. In particular, we dig into how organizations can detect
precursor signals that may indicate an intrusion attempt on OT systems, and what to do about it.
We'll also discuss findings from a new OT security report that Palo Alto Networks and partners
have prepared to help you glean threat intelligence at the IT/OT edge. John, before we get into
red alerts, we had a follow-up from last week. John and I talked about a new deal for geothermal power
for Google data centers in the US state of Nevada. So apparently,
John and I pronounced the name of the state wrong throughout our whole conversation. Ed Horley,
who is co-host of the IPv6 Buzz podcast here on Packet Pushers, and also a Nevada resident,
hit us up on Slack with a gentle correction. So it is pronounced Nev-AD-a, not Nev-AH-da.
The "a" like in "cattle" or "ranch." Yeah. Okay, there's a little
mnemonic device for me. I think I messed up because I took Spanish in high school and college. And
Nevada was actually named by Spanish explorers, so I just always kind of went with the
Spanish pronunciation. But it is Nev-AD-a. And if you talk about the mountains, you know, to the west,
they're the Sierra Nevadas. So the confusion is quite natural. Okay, okay, it's not just me.
Oh, my wife had to correct me for about 10 years before I internalized it.
Yeah, I talked to my wife about it and she was like, yeah, you did say it weird. I don't know,
not even any support on the home front. All right. And just a reminder, if you have a query,
a kudos, or a quarrelsome comment about anything here today, you can reach out at
packetpushers.net/followup. All right, John, take us to the red alert. Thank you; speaking of quarrelsome
comments, the red alert this week is on Wi-Fi. So there were 1,558 new CVEs issued in the
week ending February 26th, and 3,357 updated. Of the new ones, 294 are critical with a CVSS score
of nine or higher, and 18 came in at 10 out of 10. However, this week's red alert does not touch
on anything that has a CVE yet. Instead, it's about something called AirSnitch, which is a
collection of methods, described in a paper linked in the show notes, presented at the
2026 Network and Distributed System Security Symposium just last week. Researchers, mostly from the
University of California, Riverside, have poked the Wi-Fi client isolation mechanisms of various
access point vendors pretty hard. And they found that when they poked, not only did those mechanisms
wobble, they also fell down in a variety of ways, including, in the worst cases,
allowing for a complete man-in-the-middle attack within a Wi-Fi network and across multiple
access points, and setting the stage for things like RADIUS compromises and other breaches.
Since client isolation is a poorly standardized component of Wi-Fi behavior,
the modes of failure vary by manufacturer and even by product line.
So the general issue is: client isolation is supposed to prevent different clients
connected to the same AP from talking to each other? That's correct. They should not be able to
talk to each other directly, or see each other's traffic in anything but the radio sense. They have to
go through the access point, and then the network behind it, to exchange network packets. Supposedly.
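For context, on open-source access point software this isolation is typically a single toggle. In hostapd, for example, it's the ap_isolate option. A minimal illustrative excerpt follows; the interface name, SSID, and passphrase are placeholders, and exact enforcement varies by driver and vendor, which is precisely the variability the researchers probed:

```ini
# hostapd.conf excerpt (illustrative only).
# With ap_isolate=1, the AP declines to forward frames directly between
# its own associated stations; client traffic must go out through the
# upstream network instead of being bridged locally.
interface=wlan0
ssid=guest-net
wpa=2
wpa_key_mgmt=WPA-PSK
# placeholder passphrase
wpa_passphrase=change-me-example
# client isolation on
ap_isolate=1
```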
And you might be thinking, you know, most wireless traffic is encrypted these days,
so that should offer you some protection. But apparently, the mechanisms these researchers
discovered get around that. Indeed, they can basically bypass encryption
and authentication mechanisms in some situations. And again, this is not every access point,
and this is not every conversation or situation on an access point. But it's enough to be hugely worrisome.
Because it would allow somebody who was able to sit down at a guest Wi-Fi network on a
corporate net to sidestep their way into the private Wi-Fi and get in between inside Wi-Fi clients
and their access points, essentially becoming the access point from that client's perspective
and able to do some stuff as a result. Even if your email traffic or your Signal traffic is still
client-to-client encrypted and invisible to the attacker, there's plenty of other things going
on that give it a leg into your network and a position of authority from which to try lateral
attacks, including compromising the entire RADIUS system on a wireless network if there's one in
place. So there are some caveats that come with this, as we've mentioned, the main one being that
the researchers mostly tested against consumer access points. However, they did find their way through
the security on the one enterprise access point that they did test against, a Cisco
Catalyst 9130. So there's not as much solace in the consumer focus as you might want. And of
course, as many a small and medium-sized enterprise network manager can attest, sometimes there are
consumer-grade Wi-Fi access points in their infrastructure of which they may have knowledge
but not control. And the fact is, this research is now out there, widely available via the paper. So,
yes indeed, and it's getting news airplay now from folks like us. Some of the vendors used in
the testing have begun patching to fix the problems that the researchers uncovered, but some have
acknowledged that the root weakness is not in the software but at the chip level, and so is not
amenable to an actual fix. So in the meantime, please revisit your zero trust architectures,
please re-examine your defense in depth, consider prohibiting your employees from using public
networks for now, and stay tuned. There's going to be a lot of development on this one in the near
term, I expect. Yeah, for sure. Sorry, we don't have a particular CVE to point you to, or a
vulnerability with a patch you can go and download to fix this, but the paper is probably worth reading,
and we have links to it in the show notes. All right, let's jump into the news.
Chipmaker AMD has signed a multi-year deal with Meta to provide up to six gigawatts' worth of GPUs
for Meta's AI infrastructure. The deal also includes pledges to buy AMD CPUs. As part of the deal,
AMD is going to issue up to 160 million shares of its stock to Meta, which accounts for about 10%
of the company's common stock. Performance milestones have to be reached for these stock grants,
including product delivery and the share price hitting certain targets. According to a story in
Tom's Hardware, the value of the deal is around a hundred billion dollars, and AMD is reportedly
offering its shares to Meta at one cent per share. As of recording time, an AMD share was worth
$203, so that is a bargain. That is a weird and impressive bargain, and I have to wonder if it's
going to pass muster at some regulatory level (that's crazy), or if there'll be shareholder suits from
other AMD shareholders saying, now just a darn minute. Although, yeah, and we've also talked a lot
about the AI infrastructure ecosystem being held up by these circular deals, where
it's less about... is Ponzi scheme the wrong term here? Not quite, but I'm not sure exactly what it
is. I again have to refer back to the Kilkenny cats, which survived by simultaneously consuming each
other. I assume this is something out of Irish folklore. Some 19th-century allusion; I've
never actually traced the origin. I don't know the beginnings of that one, but I am
hoping that the deal language actually, in fact, addresses some specific number of GPUs and not
a power consumption threshold. That wouldn't make a lot of sense to me given that tweaking the algorithms
or the way that the math is done in the algorithms can result in dramatic decreases in power
consumption. I mean, we've already seen papers showing that dropping from eight decimal places to
four in certain parts of the ecosystem can reduce power consumption by anywhere from 40 to 95 percent
for inference operations. So, like I say, I hope they've written it up in more concrete terms
than power consumption. That's interesting because I looked at the press releases from AMD and Meta
and both referred to the deal based on the gigawatt number. I didn't see anything about a particular
number of devices being promised either way. So, that's going to keep the lawyers busy on that one,
maybe. I hope somebody gets busy and writes in a couple of notes about how that power consumption
gets measured. For sure. Also, this week, HPE has debuted a new Juniper PTX modular router,
carrier-grade stuff, and hackers have debuted a new Juniper PTX compromise. Oops, I know.
It's bad when things go in pairs like that. At Mobile World Congress, HPE debuted updates to
its carrier-grade Juniper PTX line of modular routers, with a new PTX 12000 line featuring
line cards with 800 gigabit per second ports, ready for upcoming 1.6 terabit per second
capacity. Each PTX line card currently delivers 43.2 terabits per second of forwarding capacity,
and HPE is offering a 22-rack-unit, 8-slot model that will support up to 432 800-gig ports,
and therefore a total capacity of 345.6 terabits per second. They have an even larger 32-rack-unit
model with 12 slots, supporting up to 648 800-gig ports and a total capacity of 518.4 terabits
per second. For less prodigious use cases, they have also premiered a PTX 10002 line
of fixed-form routers. These are two-rack-unit pizza boxes that can support up to 28.8 terabits per
second of throughput, with port speeds of 100, 400, and 800 gig. Those are substantial routers.
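Those capacity figures are internally consistent, by the way; here's a quick back-of-the-envelope check, using the per-card and per-port numbers as quoted in the episode (the script itself is purely illustrative):

```python
# Sanity-check the quoted PTX chassis capacities.
# Assumed inputs (from the episode): 43.2 Tbps per line card,
# 800 Gbps (0.8 Tbps) per port.
TBPS_PER_CARD = 43.2
TBPS_PER_PORT = 0.8

def chassis_capacity(slots: int) -> float:
    """Total forwarding capacity (Tbps) for a chassis with `slots` line cards."""
    return round(slots * TBPS_PER_CARD, 1)

def port_capacity(ports: int) -> float:
    """Aggregate capacity (Tbps) implied by `ports` 800-gig ports."""
    return round(ports * TBPS_PER_PORT, 1)

eight_slot = chassis_capacity(8)    # 8-slot, 22 RU model
twelve_slot = chassis_capacity(12)  # 12-slot, 32 RU model
```

Both ways of counting agree: 8 cards (or 432 ports) give 345.6 Tbps, and 12 cards (or 648 ports) give 518.4 Tbps.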
Yes, and having cut my teeth on, what, 2400-baud routers way, way back when I got started, my
jaw is continually dropped by these things. Unfortunately, HPE has also had to deal this week with a
critical vulnerability in older members of the PTX series. Specifically, a vulnerability in the
platform's Junos OS Evolved implementation that can allow
an unauthenticated attacker to gain root access and execute code remotely. Tagged as CVE-2026-21902,
the root problem is in the on-box anomaly detection framework, which runs at root level, is
on by default, and should only ever be exposed to internal processes and internal ports
over the internal routing plane, but can in fact, as it turns out, be accessed via externally exposed
ports as well. This bug does not appear to affect the new lines of devices, but it affects multiple
older versions that are still supported. HPE issued a security bulletin, linked in the show notes,
with both fast mitigations, i.e., how to turn off the anomaly detection function,
and links to patched OS versions. Okay, so we did get a CVE into the show today.
Yes, got to have a CVE every week. All right, links in the show notes. Also from Mobile World
Congress, Dell announced a new ruggedized x86 server for outdoor environments, things like
cell towers and cell sites, utility poles, roofs, and other exteriors. It's called the XR9720,
an enclosed, liquid-cooled server designed to support cloud RAN and edge AI use cases,
sporting an Intel Xeon 6 SoC, which provides up to 40 cores. And John, you looked up what they
meant by cloud RAN here. Right, it was not a term I'd ever dug into before; I'd seen it but
not examined it. So it's interesting, because the use case becomes clearer to me.
Cloud RAN is when you run the data processing functions of the radio access network edge
on, you know, COTS hardware, standard servers like this one, using off-the-shelf processors and
memory and everything else, rather than on specialized hardware that's completely tied into
the radio equipment. So you've separated the radio plane from the data processing plane to make
it easier to scale up all of the data angles. Because people often, without using more bandwidth
or needing a different radio access edge, want to do stuff that requires more data processing.
And for the carriers, this includes shoving AI functionality right out onto the cell poles.
Yeah, if you followed the news on T-Mobile not too long ago, they added live AI-based
translation of spoken conversations to their core network. So it's built in. And it's in beta right
now, but they are actually offering the service to customers who are willing to trial it.
But that means that, you know, potentially they've got to have AI agents
in the network that can listen and talk in both directions at the same time. So having something
right there on the cell pole that you're closest to would really help.
Yeah, that's interesting. That is an interesting use case. And that's one I can get behind for AI,
actually, that kind of universal translator; very Star Trek-like.
Universal eavesdropper, too. Got to remember that.
Well, I feel like that's already happening with mobile calls, but, you know, you take your pick.
And also, you know, I was familiar with OpenRAN, which was, you know, trying to get
RAN-like capability with COTS hardware. But the cloud aspect of it, does that mean they're just,
like, presenting that COTS gear in sort of a cloud-like way, meaning, you know, sort of shared services?
In a way, yeah; at least shared workloads. Things can be floated into it in containers,
run as long as they're needed, and floated back out when they're not.
Got it. All right, quick break to tell you about our sponsor, Forward Networks. Forward Networks
creates a digital twin of your network, and enterprises rely on Forward Networks for unified,
trusted automation across distributed hybrid networks. And if you think a digital twin is just
a network map or an inventory list, you should think again. Forward Networks' digital twin is a
mathematically precise digital replica, able to model all config and state data on every device
and trace every possible packet path. With Forward Networks' digital twin, you can ensure that
connectivity isn't disrupted, that security posture is unchanged, and that compliance is maintained
at all stages of change management. That's why enterprises rely on Forward Networks for unified,
trusted automation across distributed hybrid networks. With Forward Networks' digital twin,
you can eliminate change-related outages and subsequent headlines. That's why Forward Networks'
digital twin is the difference between risky automation and operational certainty. Learn more
about how to protect your network and your sanity with Forward Networks at forwardnetworks.com/packetpushers.
That's forwardnetworks.com/packetpushers.
And on that subject of being able to track any possible path of a packet in your network:
folks have, over the years, run into problems with data sovereignty laws when buying
network services like SD-WAN or, latterly, SASE. So last year, Versa Networks introduced a
comprehensive SASE solution for organizations that wanted and needed full control of every aspect
of data flows in their WANs, but it was DIY. So this year, Versa Networks has introduced sovereign SASE
as a service. Now, that's way too many S's in the acronym. But other than that,
it's a pretty interesting next step in evolution from their do-it-yourself offering, because it is, as the
name suggests, a fully managed implementation of their full SASE stack. So that's ZTNA, secure access,
threat prevention, policy enforcement, inspection, logging and visibility, and the control plane for
all those things, and it can still guarantee that every operation adheres to the client's local
legal and regulatory frameworks. So the entire service stack is delivered out of what they
call sovereign controlled environments, operating within the local jurisdiction of the customer,
but offloading hardware and software stack management from the customer to Versa.
Neither control planes nor policy decisions cross borders. All critical components, from
access control to operational oversight, run within your chosen jurisdiction, if you're the
customer, ensuring both data residency and operational sovereignty, while preserving the
architectural consistency of the single-vendor stack. Yeah, I'm glad you explained that, because
when I saw this, I was like, but wait, isn't part of SASE already essentially as a service?
But now I think I've understood the distinction they're making here. Yeah, yeah, I know they made a hard
distinction last year between SASE that you're in control of all the pieces of, which you can only
be in control of all the pieces of by owning those pieces, and having to run those pieces. So
there we go. Now, in looking at the press release, it sounds like this sovereign SASE as a
service is currently available in Germany, Switzerland, and Austria, meaning it'll work
if you have sovereignty requirements just for the EU. But if you need to get more country-specific,
maybe they're rolling it out in other countries over time? I expect that that is the plan, as they
can find available data center and network capacity in the different compliance regimes that they
want to offer services in. Yeah, and I understand the motivation behind the sovereignty movements,
partly for compliance reasons, maybe partly for trying to decouple from particular regimes that
you might not want to do business with, but I feel like one drawback here is, like you mentioned,
do they have the capacity for data center space, for co-lo space, even for the networking?
Exactly. So expect to see it in better-provisioned countries, or regions, first. I would
fully expect to see it in Japan, in China, in Australia, New Zealand, India, Canada, Mexico,
Brazil. But would I fully expect to see it on a small South Seas island? Maybe not.
Right. And again, EU-focused sounds like Germany, Switzerland, and Austria.
Yeah, yep. Okay. Oh, and also, I mean, you've got to figure they're going to follow the countries that
have the most rigid data sovereignty requirements. Yes. Yes, follow the compliance regimes.
So we also have news from ControlMonkey this week. Love that company name. It has expanded its
configuration disaster recovery services. Last year, they launched a cloud configuration
disaster recovery service aimed at providing disaster recovery as a service for a frequently
overlooked set of network components: the network configurations resident in enterprises'
implementations on the big infrastructure-as-a-service platforms, AWS, Google Cloud Platform,
and Azure. This year, on February 24th, they announced that they have expanded coverage to more of the
cloud network control planes. Specifically, they are now covering the configurations of content
distribution networks, as well as firewall rules, DNS records, route tables, and edge routing
policies hosted in network services and platforms including Cloudflare, Akamai, Fastly, and F5.
ControlMonkey is built around the infrastructure-as-code approach, and uses Terraform as its platform
of choice for infrastructure as code. Because most organizations don't use Terraform to manage
this new set of network services and platforms, things like Akamai and Cloudflare,
ControlMonkey connects to the control interface for each, as the client;
inventories what is set up there; and reverse-engineers from the configuration that it finds
to Terraform HCL code for that infrastructure. Then it takes daily snapshots, like a backup platform
should. If the need arises to do a service restoration, ControlMonkey says users can do a one-click
restore into a new tenant environment. So this is saying, if I am in AWS and there's an
outage and I want to redirect traffic, maybe to another region in AWS or to another cloud, I can use
this service and it will essentially run these backed-up network configurations for me on that
other cloud? My understanding on the AWS side is, if you have to spin up a DR instance of your AWS
infrastructure, this will recreate the network settings for you as part of the restore
operation. Then if you've also got to restore services to your content distribution network,
it will do that for you. One of the use cases they proffer in their press releases and such is
a ransomware attack overwriting all that crap when you haven't been saving it yourself anywhere.
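For a rough sense of what "reverse-engineering a live configuration into Terraform HCL" looks like, here's a hedged sketch. The record format, the helper function, and the field layout are illustrative assumptions, not ControlMonkey's actual code:

```python
# Hypothetical sketch: render inventoried DNS records as Terraform-style
# HCL resource blocks. This mimics the general shape of turning a live,
# API-discovered configuration into infrastructure-as-code; the resource
# schema shown here is an illustrative assumption.
def to_hcl(records):
    """Emit one HCL-style resource block per inventoried DNS record."""
    blocks = []
    for r in records:
        name = r["name"].replace(".", "_")  # derive a Terraform-safe label
        blocks.append(
            f'resource "cloudflare_record" "{name}" {{\n'
            f'  zone_id = "{r["zone_id"]}"\n'
            f'  name    = "{r["name"]}"\n'
            f'  type    = "{r["type"]}"\n'
            f'  content = "{r["content"]}"\n'
            f'}}'
        )
    return "\n\n".join(blocks)

# Example: one A record pulled from a (hypothetical) inventory step.
hcl = to_hcl([
    {"zone_id": "abc123", "name": "www.example.com",
     "type": "A", "content": "203.0.113.10"},
])
```

In a real product the inventory step would come from each platform's management API, and the generated code would then be snapshotted daily, as described above.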
That's interesting, because I think one of the things we hear is that folks don't necessarily
account for the networking side in their business continuity reviews and disaster recovery programs,
particularly around CDNs and DNS and so on. A lot of us came up in the era of the golden
config file for a router or switch model, and you would just pop that back in whenever the need
arose, and it's possible some folks are trying to run things the same way in these other
architectures. But if your copy of the golden config file has been encrypted by ransomware,
and if the live settings have been overwritten by ransomware, that will not serve. So yeah,
there's the possibility out there of the need for this definitely interesting service.
All right, moving on. The AI maker Anthropic is going to war with the US Department of Defense
over a contract dispute on how Anthropic's AI tools can be used. Anthropic has contractual
agreements that its AI won't be used by the Defense Department for mass domestic surveillance
or to control fully autonomous weapons. The Defense Department says it wants any lawful
use to be allowed, and has demanded that Anthropic revise its contract or face consequences.
Those consequences could include the DOD designating Anthropic as a supply chain risk,
which would not only kill Anthropic's $200 million deal with the Pentagon but, according to
the Register article, could bar defense contractors from, quote, deploying Anthropic's AI during work
for the Pentagon, meaning they wouldn't just be cut off from the Pentagon. They'd be
barred from working with other Pentagon contractors when those contractors were dealing
with the Pentagon. The Pentagon said Anthropic has until Friday, February 27th, to agree to the
Pentagon's demands. On Thursday, Anthropic CEO Dario Amodei released a corporate blog post that said,
in part, quote: these threats do not change our position. We cannot in good conscience accede to
their request. So now we have to wait to see how the Pentagon is going to respond. Just a program
note here: we are recording this on the late morning of Friday, February 27th, and the DOD, as of
the last time I checked, hasn't responded. So by Monday, when you're hearing this, there could be an entirely
new story, but this is what we know so far. And it's in a way heartening to see them
clinging to this, because it's been written into their end user license agreement, basically,
or the licensing agreement governing use of their open source code, from the beginning,
that there were certain use cases it was not to be used for, even though it's open source.
Right. Yeah, I'm afraid to let myself be happy about this because I've been disappointed by
tech companies so many times just sort of rolling over or being evil or whatever. So I'm surprised
and pleased that Anthropic seems to be holding the line here on this because it's a risk. Obviously
they're risking this giant contract with the Pentagon. They're risking being potentially
shut out from other customers. They are in an AI race. They are spending a ton of money and
needing to make money to make up for that. So them taking this stand, I think, is quite fascinating.
It is going to be interesting. If the amounts of money flowing through the AI space weren't so
truly mind boggling in the first place, it would be a much more shocking story and stand,
but we've gotten to that point where 100 million here and 100 million there and pretty soon
you're talking about real money. Right. Yeah, I think part of it is that Anthropic is still
privately held, so it can resist the kind of shareholder pressure that might hit a publicly
traded company. I also wonder, from my perspective, it seems like a moral stance they're taking,
but it could also be a strategic stance, in that they have set themselves apart from their competitors,
OpenAI, Google, xAI, who will all be collaborating with the DOD on things like this.
Which maybe is a win for them? Maybe not? I don't know. That is an interesting thought,
and I could definitely see folks thinking that there needs to be an alternative, because there
is going to be a group of people or countries that don't want to deal with companies that
have acceded to these demands. So yeah, we'll see. It's going to be... well, we're going to find out
pretty soon. I'll also note that Anthropic does have contracts with U.S. National Security
agencies. As of recording time, I wasn't able to determine if there are similar stipulations
around domestic mass surveillance, in particular, attached to these contracts. If so,
you sort of wonder, is this going to get Anthropic booted out of the U.S. government entirely?
I don't know. The other part of the problem is that here in the U.S., there are no meaningful
regulations or laws around how AI can be used, particularly by the government, so we are just
in uncharted territory. Yeah, stay tuned. It's going to evolve rapidly now, I think. Speaking of AI,
which we seem to be doing increasingly over the weeks: Nokia and AWS have collaborated on an
agentic AI control plane for 5G network slicing. Also at Mobile World Congress, as it turns out,
Nokia announced, on February 24th, a new collaboration with AWS on an agentic-AI-powered 5G
Advanced network slicing solution on a live 5G network. du and Orange are the first service
providers to explore this innovation in their own networks. Intent-based slicing measures live
network KPIs, things like latency and download speeds, and adjusts the radio access network policies
in real time, using that slicing functionality to meet SLAs. AI agents can boost network
performance for selected 5G base stations, for example, to guarantee or improve cell phone
performance where it's needed, say, to provide first responders and public safety authorities
better network service during emergencies. Those first responder nets are a big deal for folks
like AT&T. That's critical, yeah. Yeah, yeah. And then they can also, though, in much more
prosaic circumstances than natural disasters, dynamically adjust the capacity of different access points
and access network processing units during high-demand events like concerts or professional sports
matches, and do things like optimize performance for folks who pay extra to get VIP treatment while
they're in a venue, or make sure that payment applications and vendor systems are working,
so all the folks selling merch or food can process transactions in a reasonable amount of time.
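The intent-based adjustment loop described here can be sketched roughly as follows. The thresholds, step sizes, and function name are all hypothetical illustrations, not Nokia's or AWS's actual API:

```python
# Illustrative control loop for intent-based slicing: compare a measured
# KPI against an SLA intent and nudge the slice's radio-resource share.
# All names and numbers here are hypothetical.
def adjust_slice(measured_latency_ms: float,
                 sla_latency_ms: float,
                 current_share: float) -> float:
    """Return a new resource share (0.0 to 1.0) for the slice."""
    if measured_latency_ms > sla_latency_ms:
        # SLA at risk: grant the slice more radio resources
        return min(1.0, round(current_share + 0.1, 2))
    if measured_latency_ms < 0.5 * sla_latency_ms:
        # Ample headroom: give some capacity back to other slices
        return max(0.1, round(current_share - 0.05, 2))
    # Within intent: hold steady
    return current_share

# E.g., a first-responder slice missing its 20 ms latency target
# gets its share bumped from 30% to 40%.
share = adjust_slice(measured_latency_ms=35, sla_latency_ms=20,
                     current_share=0.3)
```

An agentic version would wrap a loop like this with KPI collection from the live network and policy pushes to the RAN, per base station.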
That's interesting. I feel like I've heard the RAN market folks talk about network
slicing a lot. I don't have a good sense of whether carriers and telcos have actually
taken them up on it, because I don't know that they really like to mess with their networks in
real time or near real time. Well, it's interesting. We've been talking about slicing for, I don't know,
almost a decade, it feels like, at least six years, but until recently there weren't many networks
that could actually use it, because it requires a native 5G infrastructure to function, and the
hybrid 4G/5G infrastructure that was the center of the first big deployment of 5G
couldn't do it. You have to have these 5G Advanced infrastructures that are 5G at all
layers, radio in, before you can take advantage of it. It took years to get people used to the idea
of maybe going to a fixed wireless access solution for their branch office network connection
in place of getting cable in or getting a fiber pulled in. They've been building towards having
a market for services that would rely on slicing to meet SLAs at the same time that they've been
working towards having networks that can actually support slicing.
Well, maybe now that they've added agentic AI... that was the missing piece.
Obviously, obviously. The main missing piece was the infrastructure, but 5G with autonomous
adjustments, at a level maybe above eight-year-old algorithmic adjustments, yeah, it probably can do the
job more gracefully. We'll see. Right, links in the show notes, if you want to find out more,
we'll wrap with a couple of financial updates. First, Dell Technologies reported fiscal Q4
and fiscal 2026 results. For Q4, Dell had revenues of 33.4 billion, up 39% year over year,
and net income of 2.3 billion, up 43% versus this quarter last year. For their full fiscal 2026,
Dell revenues were a record 113.5 billion, up 19%, and net income was 5.9 billion, up 30%
over fiscal 2025. By business unit, revenues were fairly evenly split.
The infrastructure group brought in 60.8 billion for the year, up 40%. Within the infrastructure
group, AI server revenue in Q4 was up more than 300%. We all knew who the culprit was, but
there it is in the numbers. The client solutions group brought in 51 billion for the year,
up 5%. Just 5%, which, when you're talking about 40% elsewhere, looks pretty small.
Client solutions, that's PCs. PCs, yeah, yeah. Within the client solutions group,
consumer sales were flat for Q4 and actually down 8% for the full year. Normally,
I think that might raise some flags for a company like Dell, but AI is obviously
paying the piper here, so. Yeah, I was going to say, it's just noise statistically
against those kinds of gains elsewhere. Although, I'm curious about your take on this. We've
been hearing about the high cost of memory and other components. It seems like the whole
market is just throwing everything it has at AI infrastructure, which could affect gaming and
maybe PCs and phones. Is this going to become a longer-term problem for companies like Dell?
If people were in an awful hurry to upgrade their PC fleets, I'd say yes, but I had gotten
the impression over the last two or three years that that was not a high priority.
So, this might be a good time to have a little bit of a hiccup in that replacement cycle
as they come to terms with supply chain issues driven by the AI demand.
Okay. Looking ahead, Dell sees no signs of the party ending. It's forecasting fiscal 2027
revenues of somewhere between 138 and 142 billion, which would set yet another record.
And finishing with Nvidia. You probably heard by now, but Nvidia announced their Q4 and
full-year fiscal 2026 results, and it was also happy days all around. For Q4, revenues were
68.1 billion, up 73% versus last year, and a new company record. Net income was 43 billion,
up 94% year over year. For fiscal 2026, revenues were 216 billion, up 65%, also a record, with net income
of 120 billion, also up 65%. That little gap between their revenues and their net income is frankly
astonishing. We were just talking about Dell; they brought in, what was it, 113.5 billion,
and only netted about 5 billion for the year. So the margins Nvidia is getting are fantastic.
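The margin gap is easy to see with quick arithmetic on the full-year figures quoted above (in billions of US dollars):

```python
# Back-of-the-envelope net margin comparison using the full-year
# figures quoted in the episode (billions of USD).
def net_margin(revenue: float, net_income: float) -> float:
    """Net income as a percentage of revenue, rounded to one decimal."""
    return round(100 * net_income / revenue, 1)

nvidia = net_margin(revenue=216, net_income=120)    # fiscal 2026
dell = net_margin(revenue=113.5, net_income=5.9)    # fiscal 2026
```

That works out to a net margin in the mid-50-percent range for Nvidia versus roughly 5 percent for Dell, about a tenfold difference.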
Unbelievable. Yeah. By business unit, its data center business, which does include GPUs,
brought in a record 193 billion for the year, up 68%. For AI PCs and gaming,
full-year revenue was 16 billion, up 41%. Its visualization and automotive businesses are
essentially rounding errors, although they're hoping for more. But compared to data center,
it's peanuts. So, Nvidia only offered guidance on its next fiscal quarter, but it's anticipating
yet another record. They're expecting revenues for Q1 of their fiscal 2027 at 78 billion, plus
or minus 2%. All right, that wraps up the news. John, where can folks get more from you?
If they want, they can take a look at nemertes.substack.com. That's N-E-M-E-R-T-E-S dot substack dot com.
Yeah. That's a great newsletter. I subscribe, and I recommend you do as well. I'm Drew Conry-Murray. On Bluesky I'm Drew C.M., and I'm blogging at packetpushers.net. Thanks to our sponsor Forward Networks, and do stay tuned for our sponsored Tech Bytes episode with Palo Alto Networks. We're going to dive into OT security. That's coming right up. On today's Tech Bytes, sponsored by Palo Alto Networks, we dig into how organizations can detect precursor signals that may indicate a broader attack chain, and what to do about it. We'll also discuss findings from a new OT security report that Palo Alto Networks and partners have prepared to help you glean threat intelligence at the IT/OT edge.
Our guest is Xu Zou, Senior Vice President of Cloud Delivered Security Services at Palo Alto Networks. Xu, welcome to the podcast, and can you give us a quick definition of OT?
Yeah, thank you. OT stands for operational technology. It consists of hardware and software systems that directly change our physical world. Typical OT systems include, for instance, industrial robots, controllers, sensors, and machines used in manufacturing or in oil and gas, energy, and transportation. Sometimes OT systems are also called cyber-physical systems. And the typical OT system vendors are Siemens, ABB, Honeywell, Schneider, and Rockwell. As you can see, quite different from the IT world.
Yeah, for sure. And Palo Alto Networks recently published a report on active defense for OT environments. Can you tell us a little bit about this report and what it's based on?
Yes, it is actually a great report and a great collaboration with both Siemens, a leading OT vendor, and Idaho National Lab, a leading cybersecurity lab specifically focused on OT security. Jointly, we analyzed the data from 61,000 Palo Alto Networks firewalls deployed in OT environments, along with 20 years of historical data contributed by both Siemens and Idaho National Lab. Okay, so why are we even having this conversation on some level? Because OT networks should be air-gapped, right? So what is the attack vector that we're dealing with? How are these networks getting penetrated? That's a good question. Unbelievably, the most popular attack path into an OT network is the Internet. Most people would mistakenly believe that their mission-critical OT equipment is perfectly air-gapped. However, our report and our data indicate that belief is wrong. And there are reasons for that. For instance, number one, the so-called IT/OT convergence. In the last few years, more and more OT equipment has been connected to the IT network to improve productivity and reduce cost. So indirectly, that OT equipment gets exposed to the Internet through the IT network, because nobody knows: the IT side does not know what's on the OT side and vice versa. Number two, remote maintenance.
Many OT devices like controllers and robots need to be regularly maintained, and more and more maintenance is conducted remotely via VPN or 3G/4G connectivity. And number three, the adoption of AI. In order to improve the productivity and efficiency of this equipment, more and more of it demands connectivity to AI in the cloud. That's the reason we saw an astonishing one million OT devices directly exposed to the Internet last year, a 32 percent increase over 2023.
I just want to make sure I got that number. You said you found one million OT devices? Yes, one million directly Internet-exposed OT devices. Wow. Okay. And that's dangerous
because, as you mentioned, OT is responsible for physical stuff: moving a robot around, opening and closing valves in a complex chemical or gas plant, that kind of thing. So that kind of exposure can be dangerous. It is. So we know from experience that if an attacker gets into an IT network, the time window from initial compromise to exploit can be hours, minutes, sometimes even seconds. Are we dealing with the same kind of time frame for OT? Interestingly enough, it's quite the opposite. Our data indicates that once hackers compromise a typical OT network, they will stay there for as long as 185 days before deciding to take any action. That's quite counterintuitive. The reason for that is, number one, hackers know they are getting into a mission-critical OT network.
So they want to take a good amount of time to fully understand what those devices are, what their missions are, how they're connected, so that they can plan to maximize their gain, whether that's financial gain or damaging the entire network. They need to do careful planning. Another interesting reason, which our report shows, is that the cybersecurity aspect of OT networks was overlooked over the last decade or so. So hackers knew they had time; nobody was watching them. They had time to do as much as they wanted. So this long dwell time means they can do reconnaissance, they can get the lay of the land, they can get a sense of what systems they might be able to go after. And because no one's looking, they just have the time to do that. Yes.
And how do they do all of that? Like, typically on the IT side, they'd usually use the command line. Most OT equipment does support command lines. It also supports APIs, it can run scripts, and it can establish connectivity to destinations outside. So hackers take their time to do all of that through that long window.
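As a rough illustration of turning that observation into a detection rule, here's a toy sketch that flags command-line activity on hosts where no interactive use is expected. The event shape, host names, and empty baseline are all hypothetical, not taken from Palo Alto Networks' products or the report:

```python
# Hypothetical sketch: flag command-line events on OT hosts that have no
# expected interactive use. Field names and hosts are illustrative only.
EXPECTED_INTERACTIVE = set()  # baseline: no OT controller should see a shell

events = [
    {"host": "plc-07", "type": "cmdline", "cmd": "ipconfig /all"},
    {"host": "hmi-02", "type": "process", "cmd": "runtime.exe"},
    {"host": "plc-07", "type": "cmdline", "cmd": "net view"},
]

def precursor_signals(events):
    """Return command-line events on hosts with no interactive baseline."""
    return [e for e in events
            if e["type"] == "cmdline" and e["host"] not in EXPECTED_INTERACTIVE]

for sig in precursor_signals(events):
    print(f"ALERT: unexpected command line on {sig['host']}: {sig['cmd']}")
```

The point is the asymmetry: on an OT controller, almost any interactive command is anomalous, so even a trivial rule like this yields a high-signal precursor alert.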
Your report talks about precursor signals. Is that what we're talking about here, this kind of activity happening at the command line? Are those precursor signals? Exactly. Exactly. And by the way, the majority of this equipment, whether industrial robots or controllers, is actually running on very old operating systems, even Windows CE-type operating systems. And they are essentially fully capable machines. So the hackers take their time, right? To understand the system deeply: what its role is, what the consequences of launching a given type of attack would be, what commands are supported, what APIs can be executed, what scripts can be run. They take their time to do all of that. So can you give some examples of what precursor signals might be?
Yes. For instance, the most information-rich precursor signal, I would say, is actually command-line execution. Think about it: in a typical manufacturing assembly line, a worker typically will not issue a command line directly on a mission-critical robot; they have their own control system to control that robot. But after a hacker logs into that machine, in order to collect information and find the next target, in many cases the hacker issues command lines. The command itself becomes a clear signal of that precursor stage. If we keep tracking those signals, it can help us detect those compromises very early on. Are there other kinds of signals that people should be watching out for? Yes. Another very popular one is scripting. Remember, those controllers are running on fully capable Windows operating systems. They can support all sorts of scripts, all sorts of scripting languages, and most APIs. And those are very dangerous signals. So how do I see these signals? How do I get hold of them? Do I need a special-purpose tool designed just for OT devices, or can whatever traditional IT monitoring infrastructure I might have gather these signals? So on the IT side, there are many separate security tools that monitor IT equipment. However, unfortunately, those IT tools do not understand OT languages. OT networks speak very specific languages called industrial control system, or ICS, protocols.
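To make the "different language" point concrete: Modbus/TCP is one widely used ICS protocol (my example, not one singled out in the transcript). A generic IT tool that doesn't know this framing sees only opaque bytes on TCP port 502, while an ICS-aware tool can decode the header and the function code that says what the device is being told to do:

```python
import struct

def parse_mbap(frame: bytes):
    """Parse the 7-byte Modbus/TCP MBAP header plus the function code."""
    if len(frame) < 8:
        raise ValueError("frame too short for MBAP header + function code")
    tx_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
    return {"tx_id": tx_id, "proto_id": proto_id, "length": length,
            "unit_id": unit_id, "function": frame[7]}

# Read Holding Registers (function 0x03), unit 1, start address 0, count 2
frame = bytes.fromhex("000100000006010300000002")
print(parse_mbap(frame))
```

Function code 0x03 is a read, but codes like 0x05/0x06 write coils and registers, i.e., they change physical state, which is exactly why protocol-aware inspection matters here.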
That's a completely different set of languages. That's why almost all existing IT tools cannot understand what those languages mean. And that's why, over the last decade, there have been many point solutions trying to understand those OT-specific languages and offer some help. However, unlike an IT environment, a typical OT environment, whether it's a manufacturing plant or an oil and gas field, has a lot of constraints: for instance, physical space limitations, temperature, and so on and so forth. That's why it's hard for them to adopt yet another solution; they don't have space to squeeze one in. That's why we believe the best way to help them is to extend existing IT tools with OT-specific knowledge, so that they can understand those OT-specific protocols and applications and offer help. That will cause the least friction for customers to adopt. So does this mean having logs or telemetry sent back to a traditional IT SIEM, or are we talking about having firewalls within an OT network to look at traffic and send that data back? How are we getting these signals, and where
are we sending them? That's a very good question. Our best recommendation is to leverage the MITRE ATT&CK for ICS framework. A typical OT network follows the so-called Purdue model, with its layers. Our recommendation is, by fully understanding how attackers are getting into OT networks, which our report covers in detail, to take a top-down approach following the Purdue model and the MITRE ATT&CK framework: start with the IT/OT separation and deploy, for instance, a firewall that understands OT-specific languages and protocols, then gradually go deeper to protect every process and every mission-critical asset. And those logs and communications sometimes need to be integrated with the IT side. Remember, more than 72% of OT attacks actually originate from the IT side. So it's important for both IT and OT to have a comprehensive understanding of both IT and OT signals together, to fully understand the entire kill chain.
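As a sketch of what "start with the IT/OT separation and go deeper" can look like in policy terms, here's a toy model using the standard Purdue reference levels; the device names and the boundary-crossing check are my illustration, not the report's method:

```python
# Purdue reference model levels (simplified). Device names are hypothetical.
PURDUE = {
    "erp-server": 4,        # Level 4: business/IT
    "historian": 3,         # Level 3: site operations
    "scada-server": 2,      # Level 2: supervisory control
    "plc-07": 1,            # Level 1: basic control
    "valve-actuator": 0,    # Level 0: physical process
}

IT_OT_BOUNDARY = 3.5  # the DMZ sits between Levels 3 and 4

def crosses_boundary(src: str, dst: str) -> bool:
    """True if a flow crosses the IT/OT boundary and therefore should pass
    through an inspection point that understands ICS protocols."""
    a, b = PURDUE[src], PURDUE[dst]
    return (a - IT_OT_BOUNDARY) * (b - IT_OT_BOUNDARY) < 0

print(crosses_boundary("erp-server", "plc-07"))    # IT down to control
print(crosses_boundary("scada-server", "plc-07"))  # stays within OT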
Do you feel like this is something that would fall under SOC responsibility, the security operations center, where you're trying to collect all these signals? Or where does all this go, and how does it get analyzed? That's a very good question. Right now what we see is that there are still two different schools of thought on who should own OT security. Unfortunately, right now it's a 50-50 split. One school believes the IT security team, say the CISO's team, should own OT security. The other believes that IT does not understand OT, so OT itself should own OT security. What we believe is that the right direction, and the solution, is IT/OT convergence: joint responsibility and joint ownership between IT security and OT operations. With this convergence, we believe all signals need to converge into a single SOC platform that understands both the IT and OT sides, to help both teams fully understand the situation and be proactive in addressing cybersecurity challenges in the OT network. That makes sense, but I'm also curious: we talked about this
really long potential attack cycle, or dwell time. How is a SOC operator supposed to keep that long window in mind as they're doing threat hunting or research, given that this could be a months-long attack chain? That's actually the advantage of the defense side. Unlike the IT side, where, as we mentioned, the typical attack happens in minutes, here we have this long window. If we collect enough signals and then apply AI to fully understand them, stitching the signals from both the IT and OT sides together to understand the entire kill chain, we can actually do a much better job of proactively preventing those attacks from even happening. That's a unique opportunity for OT security.
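The stitching idea can be sketched like this: order IT-side and OT-side signals in time, chain events that fall within the long dwell window, and keep chains that cross the IT/OT divide. The event shapes, host names, and the 185-day gap parameter are illustrative assumptions, not the product's actual logic:

```python
from datetime import datetime, timedelta

# Hypothetical IT- and OT-side events spanning months; in practice these
# would come from a SIEM/XDR backend. Field names are illustrative.
events = [
    {"host": "eng-ws-3", "side": "IT", "sig": "phish-click", "ts": datetime(2025, 1, 10)},
    {"host": "eng-ws-3", "side": "IT", "sig": "vpn-to-ot",   "ts": datetime(2025, 2, 2)},
    {"host": "plc-07",   "side": "OT", "sig": "cmdline",     "ts": datetime(2025, 5, 20)},
]

def stitch(events, max_gap=timedelta(days=185)):
    """Group time-ordered events into chains whose consecutive events are at
    most max_gap apart, keeping only chains that span both IT and OT."""
    ordered = sorted(events, key=lambda e: e["ts"])
    chains, current = [], []
    for e in ordered:
        if current and e["ts"] - current[-1]["ts"] > max_gap:
            chains.append(current)
            current = []
        current.append(e)
    if current:
        chains.append(current)
    return [c for c in chains if {"IT", "OT"} <= {e["side"] for e in c}]

for chain in stitch(events):
    print(" -> ".join(e["sig"] for e in chain))
```

Individually, a phishing click in January and a command line on a PLC in May look unrelated; it's the long correlation window that lets a defender see them as one chain.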
I mean, honestly, it's nice to hear that for once the defense side has a little bit of an advantage, or at least some time. Well, Xu, thank you for this. We've only dipped our toes into OT security here, so thank you for that. Any last thoughts before we wrap? I would strongly recommend the audience go download the great OT security white paper, a joint effort by Palo Alto Networks, Siemens, and Idaho National Lab. This paper offers details to help us understand the uniqueness of OT networks, but most importantly, we should embrace IT/OT convergence to empower both IT and OT teams to be proactive in addressing OT security.
Again, excellent. So that report is called the Intelligence-Driven Active Defense Report 2026. We'll have a link to that in the show notes that accompany this podcast to make it easy to find. We'll also have other resources, including a launch blog and some insights from Palo Alto Networks' Unit 42 threat research group around the findings in the paper. Thanks, Xu, for joining us, and thanks to Palo Alto Networks for being a sponsor. Sponsors ensure that the Packet Pushers can continue to offer high-quality, deeply technical content for your professional development for free. That includes more than a dozen technical podcasts on networking, security, IPv6, DevOps, and more. We've also got an industry blog, two weekly newsletters, a community Slack group, a YouTube channel, even an IRC group. You can find it all at packetpushers.net, free with no login required. You can also hear us on Spotify or Apple Podcasts. Last but not least, remember that too much networking would never be enough.

The Fat Pipe - Most Popular Packet Pushers Pods