
Take a Network Break. I'm Drew Conry-Murray.
Johna Till Johnson and John Burke are away, so I'm
pleased to welcome Tom Hollingsworth from the
Futurum Group and Tech Field Day as guest
opinionator. Tom, nice to have you again.
It's always a pleasure to be here with you, Drew.
Well, thanks. We got a bunch of stories to cover,
but first a reminder that after the news,
we have a sponsored Tech Bites conversation with
StatSeeker. They're a network monitoring company
that collects high-fidelity network data to help engineers
and administrators get visibility into physical, virtual,
and logical interfaces to find and fix problems.
We're going to explore how StatSeeker works,
hear customer use cases, and find out how StatSeeker
differentiates itself from other network monitoring products.
And by the way, if you like Network Break,
we've got a whole bunch of other podcasts
over at packetpushers.net, including Day Two Cloud,
Packet Protector, Heavy Networking,
IPv6 Buzz, Network Automation Nerds,
and N Is for Networking.
It's nerdy tech analysis and compelling conversations
all about infrastructure, cloud,
professional development, and more.
All right, we'll start off with our red alerts.
Cisco has issued two CVEs for its Secure Firewall
Management Center or FMC software,
each of which has a risk score of 10 out of 10.
Great for Olympic gymnastics, terrible for CVEs.
Both vulnerabilities are tied to the web management interface for FMC.
The first could, quote, allow an unauthenticated remote attacker
to bypass authentication and execute script files
on an affected device to obtain root access
to the underlying operating system, end quote.
The second could also allow an unauthenticated remote attacker
to execute arbitrary Java code as root
on the affected device.
Regarding the second vulnerability,
Cisco notes the attack surface is, quote unquote,
reduced if the FMC management interface
doesn't have public internet access.
However, Cisco says there are no workarounds
for either vulnerability.
Cisco has released software updates.
You have to apply these software updates
if you want to fix the bugs.
Tom, I feel like calling anything secure
when it's related to security is just like asking for it.
Yeah, it's only a matter of time
before somebody finds a hole.
That's one of the things we teach a lot of people
when they're starting out in security, right?
There's no such thing as perfectly secure
or perfectly patched.
And there are ways to alleviate some of those issues.
Like one of the things that Cisco said,
why on earth are you allowing access to FMC
from anywhere that's not internal to your network?
Like, you know, we think all the way back to 1996's
Mission: Impossible, right, with the black vault
that William Donloe was in.
He may have been onto something there.
It's like one guy with access to one server
to be able to fix things.
Yeah, it's a bottleneck, but I promise you,
unless you're Tom Cruise, you're not hacking that thing.
I guess I'm feeling like that's not really scalable.
But this is the world we're in.
The problem though is that people are scaling their
ability to find these kinds of vulnerabilities
in their attack surface.
And if you're not scaling along with them,
you're going to have to scale way back.
Yeah, that is also true.
All right, let's move into the news.
SASE provider Cato Networks has announced a new offering
that to my mind kind of sounds like automated threat hunting,
although that's not what Cato is calling it.
They're calling it Dynamic Prevention.
The feature collects and correlates anomalous signals over time
to look for patterns that might add up to malicious behavior.
If the system determines these signals
do indicate some kind of malicious action,
it can automatically apply rules to block activities.
Cato says the service, which is included in its SASE platform,
can correlate months of network and security activity
from Cato sensors, including DLP, IPS, and anti-malware.
This is an interesting idea.
I think it does kind of fill in a gap where,
you know, point-in-time
security controls might not catch an individual action
or an individual signal that's part of a longer attack chain.
And if you can't afford a SOC or you don't have threat hunters
in your network, this might be worth a look. What are your thoughts?
I think it is because we're starting to see this more and more
with the rise of using automated tooling to do these kinds of attacks.
You're in essence kind of fuzzing the network to try to figure out
where you can get entry points or what can slip past the sensors.
And if you can gain a foothold, then you can deliver some really nasty payloads.
And I think that more and more companies are going to start doing this
as they gain access to AI tools
that are able to correlate this data more effectively.
Because one of the things that we've noticed over the years
is that while humans believe they're really good at pattern recognition,
they actually really suck at it.
And so having a system to kind of sit there and say,
wait a minute, we're seeing, you know,
we're seeing connection requests at about the same time every day
from roughly the same area of the world
that are trying to deliver these certain kinds of payloads.
You might want to look into that.
That's just one example of being able to put those pieces together
to kind of raise suspicion.
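The over-time correlation Tom is describing can be sketched in a few lines. This is purely an illustrative pattern, not Cato's actual implementation; the event format, field names, and threshold here are made up for the example. The idea is that no single signal is alarming, but a signal recurring at the same hour from the same region across many days is.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (ISO timestamp, source region, signal type).
# Each event alone is too weak to alert on; the recurrence is what matters.
events = [
    ("2025-03-01T02:14:00", "region-A", "conn_attempt"),
    ("2025-03-02T02:09:00", "region-A", "conn_attempt"),
    ("2025-03-03T02:20:00", "region-A", "conn_attempt"),
    ("2025-03-03T14:05:00", "region-B", "conn_attempt"),
]

def recurring_patterns(events, min_days=3):
    """Group signals by (region, signal, hour-of-day) and flag
    groups that recur on at least `min_days` distinct days."""
    days_seen = defaultdict(set)
    for ts, region, sig in events:
        t = datetime.fromisoformat(ts)
        days_seen[(region, sig, t.hour)].add(t.date())
    return [key for key, days in days_seen.items() if len(days) >= min_days]

# region-A attempts around 02:00 recur on three distinct days -> flagged
print(recurring_patterns(events))
```

A real system would obviously weight and score signals rather than use a hard day count, but the shape of the problem, which is bucketing weak signals over long windows, is the same.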
And yeah, it really is, it kind of looks a lot like threat hunting to me,
but people don't buy threat hunting.
They buy dynamic things or they buy preventative things.
Because prevention is an op-ex cost, not a capex cost.
So people are more than willing to pay monthly for that.
Yeah, threat hunting is definitely a capex cost.
I think the key thing here is like the time element of months.
Because as you said,
sometimes humans aren't great at pattern recognition,
but if that pattern is happening in a day or in a few hours,
maybe you would pick up on it, but if it's happening over weeks
at a time in a stealthy attack,
that might not make it into your eyeball.
So it's an interesting idea.
All right, we've got a few quantum stories to cover.
First, Google has announced how it plans to support quantum-resistant cryptography
in its Chrome browser. Rather than add X.509 certs
with post-quantum cryptography to its Chrome root store,
which Google says could introduce performance and bandwidth issues
for TLS connections,
Google's going to rely on something called
Merkle Tree Certificates.
I don't know if it's Merkley or Merkle.
Anyway, Merkle Tree Certificates, or MTCs.
MTCs are part of an IETF working group looking to address the impact
of larger post-quantum signatures on public key infrastructure.
So what are MTCs?
The Google blog says, quote,
MTCs replace the heavy serialized chain of signatures
found in traditional PKI with compact Merkle Tree proofs.
In this model, a CA signs a single tree head
representing potentially millions of certificates.
And the certificate, that's air quotes that Google uses,
sent to the browser is merely a lightweight proof of inclusion
in that tree.
Google says MTCs will make it easier to drop in post-quantum algorithms
while avoiding the bandwidth and performance penalties
of classical X.509 cert chains.
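For listeners who want the gist of a proof of inclusion, here's a minimal sketch of the textbook Merkle tree construction. This is the general idea, not Google's actual MTC wire format: the CA signs only the root, and a browser-style verifier can confirm one leaf belongs to the tree using just a handful of sibling hashes rather than a chain of signatures.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build the tree bottom-up; return the root hash (the 'tree head')."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Return the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))   # paired sibling + side
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(leaf, proof, root):
    """Browser-style check: recompute the root from the leaf and proof only."""
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

certs = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
root = merkle_root(certs)
proof = merkle_proof(certs, 2)
print(verify(certs[2], proof, root))  # True
```

The proof is O(log n) hashes, so a tree head covering millions of certificates still yields a proof of only a couple dozen hashes, which is the bandwidth win Google is after.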
Your thoughts, Tom?
So I see this being proposed a lot in IoT devices.
When I talked to DigiCert's Mike Nelson years ago,
he was talking about the fact that if you try to throw a big
RSA 2048 certificate at your thermostat,
it's going to barf, right?
Because your thermostat CPU is not designed
to handle that kind of calculation load.
So I hear what Google's saying about bandwidth
and performance for TLS, but I can't help but wonder
is this an effort to make those low-cost
ARM CPUs that are going into Chromebooks
last a little bit longer?
Because as everyone knows, like PQC,
one of the reasons why it works as well as it does
is because it's doing lattice-based encryption,
which means you have a lot of calculations that have to happen,
which makes it really hard for classical
and quantum computers to break it.
But that does increase the overhead.
Honestly, with today's internet,
I don't think that customers are going to bat an eye
at an extra second or two lost on a connection.
Yeah, some people are.
But realistically, this is about devices
that they want to put Chrome on
that are not going to be tolerant
of the extra processing load.
Now, I think that that will change.
I really see this as a stop-gap measure
for the next few years
to start seeding values about store-and-harvest attacks
where people are collecting data that exists now
and then they're going to replay it later
against quantum algorithms that are optimized
for cracking things like RSA and PKI.
So I think that what you'll see is probably within two to three years
when better ARM processors come out
when more robust CPUs come out,
we'll see this start to drift away
because those processors will be optimized
to do certificate generation and key exchanges and things like that.
Kind of like we saw with Intel where they had cores
that were dedicated to doing that.
We'll see that kind of start happening
as the software becomes better optimized for this
because it is the reality that we face now.
This is no longer just somebody's idea of what's going to happen
if someone knows how to break RSA.
We know what's going to happen,
which is something we'll talk about in a second.
Yeah.
Yeah, that's an interesting point
about Chrome on ARM processors and IoT devices.
I'm sure Google wants Chrome to be everywhere.
So it needs to be able to, one, run and support PQC
without falling over and, two, make Chrome viable everywhere.
So yeah, I can see that.
Just to wrap up on this,
Google is adopting a multi-phase trial and rollout.
Phase one, which is currently underway, is testing the feasibility of MTCs
by using MTCs with real internet traffic,
but backing them up with traditional X.509.
Additional phases will begin the following year.
We've got a link to the Google security blog
if you want to get more details.
Sticking with quantum, Canadian researchers
have published a paper claiming to have developed
an alternative to Shor's algorithm
that would require substantially fewer qubits
to crack RSA 2048: from approximately 1 million qubits
using Shor's algorithm down to the low thousands.
They also estimate that theoretically
the new algorithm could take just minutes
to crack RSA 2048 versus years for Shor's algorithm.
Their method blends the use of both classical
and quantum computers to achieve its results.
And I just want to say at the outset here,
these researchers are not claiming they have cracked RSA 2048.
They're saying their algorithm based on the tests
that they've done could potentially someday
be faster at it than Shor's.
So leave it to Drew to give me something
that I actually had to do research on.
This was kind of fascinating.
This paper was cool.
Basically what they're doing is they're saying
Shor's algorithm relies on a Fourier transform, right?
So there's a lot of compute power that has to go into that.
And they're essentially saying that they found a shortcut,
because instead of doing processing over a series
of complex numbers, they are actually working
in finite fields with modular arithmetic.
So basically they're saying we're seeing patterns
and we're kind of making sure like we're working on those.
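For context, the classic Shor pipeline is: pick a base a, find the period r of a^x mod N (the only step that needs a quantum computer at useful scale), then turn that period into factors with ordinary gcd arithmetic. Here's a toy classical sketch of that pipeline; the brute-force find_order stands in for the quantum step and is only feasible for tiny numbers. The function names are mine for illustration, and none of this reflects the new paper's JVG shortcut.

```python
from math import gcd

def find_order(a, n):
    """Brute-force the period r of a^x mod n. This is the part a
    quantum computer speeds up; classically it's exponential."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    """Shor-style classical post-processing: turn a period into a factor."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = find_order(a, n)
    if r % 2:
        return None               # odd period: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, n)

print(factor_via_order(15, 7))    # 3 (order of 7 mod 15 is 4; 7^2 - 1 shares factor 3)
```

Everything outside find_order already runs on classical hardware, which is why hybrid classical-plus-quantum designs, like the one the researchers use, are the natural shape for this problem.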
And the other thing that people have to understand
is there are huge quantum computers out there.
Like every day they're just getting more and more powerful.
And that's what the qubit number is.
But the problem is it's not just the number of qubits
that the computer is capable of producing.
It's the amount of noise that comes out of it
because there's a lot of videos out there you can watch on it.
But basically what happens is when you're trying to find
the state of a single qubit,
all of the other states are noise
that consume other qubits.
And through the magic of math,
whatever you're looking for actually
ends up being the right answer if you know which qubit to look at.
You've just got to filter all of the noise.
And that's why we haven't cracked RSA yet
is because the amount of noise that's generated
as you continually add qubits increases,
not exponentially, but it increases in a similar fashion.
And so we have to get better at developing software
to filter that noise.
What they're doing here with this particular algorithm
is basically saying,
yeah, but what if we didn't have to worry about the noise
because we found a shortcut in the math itself?
And it may not sound like much, right?
I think one of the things quoted in the paper
was that, in theory,
there was like a 33% reduction.
But in practice, it was closer to 40%.
And you're like, oh, well, that doesn't sound like a whole lot.
Yeah, well, theoretically, in order to crack RSA,
you need effectively a million qubit computer.
If you can reduce that to 600,000 qubits,
that's a pretty big reduction.
And when you consider the amount of power
and things like liquid nitrogen that it takes to do this,
because yes, we have to keep these things stupidly cold right now.
Well, some of that depends on the type of quantum computer, but yes.
And that's true.
And that's one of the reasons why we're starting to see this approach
with a hybrid classical and quantum computing
is to allow the quantum computer to do things that's really good at
and not have to consume resources to do everything.
And I think that that's where we're going to see
the biggest breakthroughs to start with
is quantum computers are going to be kind of used
almost like a deep thinking core to do these kinds of things,
to instantly solve these factorization problems.
But then it's going to shovel that data back
to a classical computing core, like a cluster
to do the actual work of implementing it.
Because otherwise, I mean, we're already seeing this right now
with AI data centers, right?
Like, we can't get enough resources
to run these things the way that people want them to be run.
And that's a quote unquote new problem.
Quantum computing, we've known about this for decades
because we know what it takes to run a quantum computer.
And even with the advances that we've been making in the science,
it's still really hard.
Yeah.
Yeah, their method did blend the use of both classical
and quantum computers to achieve their results,
which I think also happens when you're using
Shor's algorithm to attack cryptography.
The researchers say they have tested this algorithm.
They call it JVG, which is the first initial
of each of the researchers' first names.
They did a simulation and they actually did use
quantum hardware from IBM in their testing.
Again, they didn't test it against RSA 2048,
but they did test it.
And, as the paper notes, quote,
on quantum hardware, JVG reduced the required runtime
from 67.8 seconds to two seconds
and the quantum gate counts by over 98%.
I should note that the paper has been released as a preprint,
which means the version we've linked to hasn't gone through peer review
or been published in a journal.
That is not an indicator one way or the other of the paper's quality.
I think preprints are a standard practice in academia.
I just want to make sure that listeners had that context.
And since we were talking about quantum computers
and the number of qubits, I was curious what's out there.
Based on the quick research I did,
Caltech is currently the winner.
They rolled out a quantum computer last September,
which supports 6,100 qubits.
All right, moving on, quick update from NVIDIA.
The company recently pledged to invest $100 billion
in OpenAI, and now NVIDIA is walking that back.
It's capping its investment at $30 billion.
CEO Jensen Huang said it was because OpenAI is going public soon,
not for any other reason you might think.
There has been criticism of sort of like these circular deals
by NVIDIA and cloud giants.
They promised to invest billions in the AI companies,
and then the AI companies promised to buy billions
in products and services,
which has some people in the market being like,
wait a minute.
So Tom, does this feel like a walk back or a reality?
Is this a crack finally in the great infrastructure boom or?
I don't know if it's a crack,
but I think it's maybe admission
that just pouring money into these things
isn't going to be the end all.
Jensen's got a good point.
If they're going public, why do they need my money now?
Why can't they just buy things from me like normal companies do?
Because you bring up a really good point.
If I'm investing $100 billion in you for you to turn right around
and buy $100 billion worth of chips back from me,
then effectively it's a net zero.
Although, to those people who pointed out,
this is not Enron, because this isn't fake money.
It's actually real money.
It's just, it's on a dump truck being driven back and forth
between open AI and NVIDIA all day long.
But the problem is that it's still a net zero.
And so I think Jensen's point is,
if you are about to go convince a whole bunch of investors
to invest in your company and give you lots of billions of dollars,
maybe it's time you give the actual billions of dollars
to me for the hardware that you need to make this work.
And I think that he realizes that if he doesn't scale back now,
then they're just going to create this feedback loop.
And this is in a way, it's like weaning a calf, right?
Like I've got to give you a little less to see if you can make it.
And if you do, to extend the analogy,
I'm fattening you up for the slaughter later.
Because, you know, well, with all the announcements
that we've seen this week with OpenAI
making deals with the federal government,
I'm sure Jensen's like, hey, the government's got money.
I'll take theirs. It spends just the same as yours.
Yes, for sure.
All right, moving on. This week, Accenture
agreed to acquire
Ookla from Ziff Davis.
Ookla owns the popular Speedtest website,
as well as wireless design company Ekahau.
It was sold for approximately $1.2 billion.
The organization is going to be folded into the connectivity
division of the consulting firm. Accenture
announced that it will target Ookla services
for end-to-end network intelligence services
for AI-based transformations.
Tom, you brought this story.
And I didn't think of Accenture as like a product
and services company.
I thought it was more of a consulting company.
So I think this is odd.
But maybe I need to update my mental model of Accenture.
So it's funny because this actually came out earlier this week.
And I was talking to some of my Wi-Fi friends,
like Keith Parsons, host of Heavy Wireless.
And we were debating this back and forth.
And one of the things that came up was,
why did they pay so much money for this company?
And one of the things was, when you think about
what they are getting from Ekahau,
which is a component of Ookla, which is now part of them,
they have a lot of data on Wi-Fi installs.
And there's some questions around,
well, who actually owns that?
And I think there was some clarifications
in the last couple of years in their user agreements
of exactly who owns the data and all that other stuff
that gets uploaded to the cloud.
You know, that's going to be something valuable for Accenture
because what they're going to be able to see
is the quality of Wi-Fi deployments out there.
But the flip side of this,
and this is something that I was thinking through
as people were bringing up another product
that Ookla has come out with in the last year.
It's an assurance system for hotels or other large venues.
So basically what it is, you go in and Ookla will sell you
a service that will allow you to attest
that the Wi-Fi in the hotel is good.
Like, you know, three, four, five stars kind of thing.
Right.
But when you think about it,
like when people go shopping for a hotel now,
yeah, I'm looking for a spa, I'm looking for a pool,
but if you're a tech person,
I'm looking for good Wi-Fi.
I'm looking for good connectivity.
And what better way to do that
than have a third party offer that as well,
you know, once a year we'll test it to make sure
it's working. Or, in Ookla's case,
what they would probably do is put a Speedtest server
on-site and use it to constantly monitor
that system, because it doesn't cost them any extra.
And then they can update that in real time, right?
What better way to get into a consulting company,
like Accenture, than to provide a service
that they can add on to your yearly contract
and basically make back all the money
they spent on the acquisition.
But now that I have all of this data
about the Wi-Fi installs,
now I can go back and sell you a service
and say, hey, you know what?
I can make that connectivity even better for you
so you can go from three to four or four to five,
which is something that I can prove over the time
that I have all of this extra data
about how many more visitors you get
when your Wi-Fi is better.
And that will not only offset the cost of the upgrade,
which we will happily sell you,
but the cost of the service to keep it maintained all year long.
And someone at Accenture probably has big dollar signs
flashing in their eyes
because of how great of an idea this is.
Yeah, 1.2 billion is a surprising number
for this portfolio, again, going to Accenture.
But I guess I can see the argument for it.
All right, we'll finish up with some financial results.
We'll start with Broadcom.
They announced results for their Q1 2026.
They are definitely riding the AI infrastructure wave.
Revenues were a record 19.3 billion,
up 29% versus this time last year
with net income of 7.3 billion, up 34%.
The company specifically called out
its AI-related semiconductor revenue,
which was 8.4 billion,
up more than 100% since last year.
Broadcom's software business,
which includes VMware and Symantec, earned 6.8 billion,
which is a nice number,
but it was only up 1%
versus this time last year.
So you can see where all the energy is.
You remember when we thought that hardware was dead
and software was the key to making all that money?
All we needed to do was create
a paradigm shift and all of the sudden,
now everybody wants semiconductors
and that's what we're seeing everywhere, right?
Hardware is selling as fast as they can get it cut off the die
because of the potential of where we're at.
Like that's the best kind of hardware sale
to these companies because even if you don't actually end up using it,
I have sold it to you.
And so I think that that's one of the things
that we're seeing is that Broadcom really understands
that they have a mint sitting underneath them
because it's not just the chips that are being bought
for GPUs and CPUs.
It's the network infrastructure that runs it
and who supplies most of the chips
for that network infrastructure?
That would be Broadcom, yep.
Yep.
Yeah, and for the next quarter,
the estimates also look great.
Broadcom's forecasting 10 billion in AI semiconductor revenue
alone for Q2,
as well as a new record for their total revenue.
So the tide has not dropped here.
It is lifting other boats as well.
At least those boats in the semiconductor business.
Marvell reported Q4 and full-year fiscal 2026 results.
For Q4, revenues were 2.2 billion
with net income of 396.1 million.
For the full year, Marvell had revenues of 8.1 billion,
which is a record,
up 42% year over year.
They brought in net income of 2.6 billion.
Marvell is also forecasting strong revenues
for its next quarter.
So Marvell is also enjoying that
we-need-to-buy-all-the-chips-everywhere mindset.
Yeah, I think that this is actually really good news for Marvell.
I mean, they are a partner of NVIDIA.
So they are getting a lot of that kind of halo effect.
But historically, Marvell has been seen
to do a lot of infiniband work.
And I think that as we're starting to see the shift inside of NVIDIA
going over to Spectrum X,
I mean, obviously, those are Marvell DPUs
and they're using a lot of Marvell semiconductors.
Like, that is the thing.
If you buy the package from Jensen,
it includes all of the stuff that is made by in-house companies.
It just makes sense that if people are buying tons of GPUs
and tons of servers,
they're also going to be buying tons of networking equipment
and storage equipment on the back end and make it all work.
And why not sell as much as you can
before people figure out what's going on?
Yeah, exactly.
All right, our last story for the day is CrowdStrike.
They are a security vendor.
They reported Q4 and fiscal year 2026 results.
For Q4, revenue was 1.31 billion,
up 23%; net income was
38.7 million.
For the full year, CrowdStrike had revenue
of 4.81 billion, up 22%.
However, the company lost money for the year,
posting a net loss of 162.5 million.
I will note that CrowdStrike
spent more than a billion dollars on acquisitions
just in January of this year.
They bought a browser security company
and an identity security company.
So they, I think, are willing to lose some money
in a bid to grow into new markets.
I think that what's happening
with a lot of these companies that had name recognition
is that they are trying to diversify across all these markets
in order to be able to pick up additional service offerings
because a lot of companies will be willing to say,
well, I know crowd strike and I trust them.
Relatively trust them after the whole kerfuffle
a couple years ago.
But I'm going to buy the whole suite from them, right?
Because I don't want to do this piecemeal
and have to worry about tool integration and things like that.
But the flip side of that that we've also seen
is that other companies that have a similar offering
maybe not necessarily in security
are now buying into security companies
to provide that. One that popped up a few weeks ago was ServiceNow.
Like they made a play for security companies
so that they can start offering security services
on top of everything else they do.
So I think this market's going to get very crowded
and a lot of the smaller players
are going to find themselves getting bought out
to be features in a larger service set.
I mean, we saw that with Zscaler and SquareX recently
where Zscaler purchased SquareX
to offer browser security
as a part of what they were doing.
That's something Drew, you and I talked about
on a recent Tech Field Day podcast episode
about how that seems to be a new market
that people are going to get into.
So I wonder what's going to happen
with all the other companies that are kind of playing
in that market if, you know,
a company like CrowdStrike is going to say,
you know what, it's time for me to pick this up
because it's an offering that I can add,
I don't know, $2 per seat to the bottom line.
Yeah, and browser security in particular
is becoming a hot market in the security space
and CrowdStrike, you know, sort of specializing
in endpoint protection.
I assume they feel like the browser
is kind of our territory.
So we can't, you know, let somebody else
with a secure browser get onto a platform
that we're also protecting.
So it makes sense that they would want to go that direction.
All right, links to all the stories
we covered will be in the show notes
that accompany this podcast.
That does wrap up the news section.
Tom, where can folks find you online?
The easiest place to find me is techfieldday.com.
That's got all the stuff that we do and all of the fun going on.
I also do a couple of other podcasts
at techfieldday.com/podcasts,
like the ones I mentioned.
But of course, if you want to see my ramblings
about all the other random stuff that I do,
networkingnerd.net is my blog.
I'm trying to put some more stuff out there
for people to enjoy and possibly even get mad at
and leave funny comments.
Absolutely, it's a great blog.
We feature your posts often in our Human
Infrastructure newsletter, so go check it out.
I'm Drew Conry-Murray.
I'm on Bluesky at Drew C.M.,
and I'm blogging at packetpushers.net.
Please do stick around
for our sponsored techbites conversation
with StatSeeker.
They are a network intelligence platform
and you can learn about what they're up to
because that's coming right up.
Today on the TechBites podcast,
we hear from StatSeeker.
This is a network monitoring company
that collects high fidelity network data
to help engineers and administrators
get visibility into physical, virtual,
and logical interfaces
to find problems faster,
understand root causes
and spot behaviors and anomalies
so that you can prevent problems
instead of just reacting to them.
We're going to explore how StatSeeker works,
hear customer use cases,
and find out how StatSeeker differentiates itself
from other network monitoring products.
Our guests are Dylan Hensler,
customer solution specialist
and Andrew Greenlaw,
technical account manager.
Dylan and Andrew, welcome to the podcast.
And Andrew, we'll start with you first.
Just give us a quick overview of StatSeeker.
Thanks, Drew.
So StatSeeker is a self-hosted platform
for monitoring critical networks:
performance metrics
and events like you spoke about.
It's really around real-time troubleshooting
but then also that piece around
the historical analysis,
long-term reporting that StatSeeker is kind of known for.
It's an on-premise solution.
So there's a lot of flexibility options
for how it can be hosted.
We've got customers that still have this
in physical and virtual environments,
but then have moved into more Azure,
AWS, Google Cloud,
we've even had customers
push this into Nutanix and things like that.
So it really kind of gives customers
the option to place this where they want
and kind of control
and manage that data themselves.
You know, not bound by a SaaS solution.
This is their data.
So it doesn't need an army of pollers.
It doesn't need separate databases
that you're managing.
Everything comes completely housed inside StatSeeker.
So it really is a smart,
highly efficient data collector
for a range of different network sizes.
So we go up to the super large,
but we also play in the small
and medium-sized networks as well.
Well, Andrew, StatSeeker has been around a long time.
I mean, we've done work with you
as Packet Pushers going back
maybe a decade.
You've been around a long time.
But the thing I remember about StatSeeker
from back in the day
was you guys could ingest
a stupid, ridiculous, crazy amount of telemetry
and handle it.
And like you were saying,
without having to have 50 pollers
out there to be able to scale,
is that still the thing?
Yeah, that's still very much kind of who we are
and what we do.
And you say a decade,
but we've been kind of around
in this kind of space
for the past 25 years now.
You know, this is our kind of bread
and butter network performance monitoring.
We've kind of seen a lot of change
in the environment
and the kind of the evolution of networks.
But definitely that whole value
around data, data collection,
large, big data play.
That's kind of who StatSeeker is
and that's very much kind of what we do.
Regardless of, you know, kind of the change
in where the data sits and things like that.
You know, so, yep, definitely.
Let's dive in. What kind of data
are we talking about?
The foundation of what StatSeeker is looking at
is just your classic network metrics:
device and interface performance
and health stats.
We're primarily collecting those
over SNMP.
And we're also doing reachability
and event and latency signals
via ICMP.
So very open protocols.
We tried which will just support
almost any network device out there.
For discovery, it's all IP range
driven and then configuration
and inventory details
give you refreshed over time
through your scheduled re-walk.
So the system is going to stay aligned
with what's actually
on your network automatically.
That's a key thing for us.
So you set StatSeeker up and
a lot of it can be automated
in the background, and it maintains itself.
There's not a lot of day-to-day maintenance
or configuration to do.
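The IP-range-driven discovery and scheduled re-walk described here can be sketched in a few lines. This is an illustrative Python sketch, not StatSeeker's implementation; the function names and the Linux-style ping flags are assumptions:

```python
# Illustrative sketch of IP-range-driven discovery.
# NOT StatSeeker's implementation; names and ping flags are assumed.
import ipaddress
import subprocess

def expand_range(cidr: str) -> list[str]:
    """Expand a CIDR block into its usable host addresses."""
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

def is_reachable(host: str, timeout_s: int = 1) -> bool:
    """Crude reachability probe via the system ping binary.
    The -W timeout flag is Linux-style; other platforms differ."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True,
    )
    return result.returncode == 0

def discover(cidr: str) -> list[str]:
    """Return the hosts in the range that answered a probe."""
    return [host for host in expand_range(cidr) if is_reachable(host)]
```

A scheduled re-walk in this sense would simply re-run `discover()` over the same ranges so the inventory stays aligned with what's actually on the network.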
On top of that, StatSeeker can work
with event data like syslog,
SNMP traps for alerting and troubleshooting.
It also supports integrations
where the data source is a vendor API.
The Cisco ACI or Meraki integrations
are good examples of that model.
And then if you have specialized
devices or metrics,
we do have what we call custom data types
to let you extend what can be polled
and reported.
Okay, so you're bringing in data
from everything and you're doing it
in a modern way.
You can pull in data,
like you said, via API,
if that's what I want.
You can grab my data that way too.
Okay, so specialized integrations then,
like you mentioned Cisco ACI and Meraki,
does that mean like yeah,
we did some work to make sure
that I can model that specific
sort of network device?
100% we've got both of those
fully built out in the product
and we ship with built-in dashboards for those.
So you connect StatSeeker to your API key for those
and you're up and running within minutes.
So there's really not a lot of configuration to do.
You can, we're very flexible,
but there's not a lot of configuration
to do out of the box.
But if you really want to get narrowed down
to a specific picture,
we can do that as well.
How often are you collecting this data?
For SNMP, we're polling every minute.
For the API integrations,
we're polling every five minutes.
And just the big thing there is
that we store those five minute
or one minute polls indefinitely
as long as you've got the space for it on your server.
So we don't roll anything up.
We don't average it out.
So you never lose granularity?
You never lose granularity.
Yeah, that's a big part of what we do.
So you can go back months
and see exactly what happened in real time
for any of your devices or interfaces.
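To make the no-roll-ups point concrete, here's a tiny illustration (not StatSeeker code) of how a five-minute average can hide a one-minute spike that raw samples preserve:

```python
# One-minute utilization samples (percent) for a single interface;
# one of the minutes has a short saturation spike.
raw = [12, 11, 13, 98, 12]

# A five-minute roll-up would replace the five samples with their
# average, and the spike disappears into the mean.
rolled_up = sum(raw) / len(raw)   # 29.2

# Keeping the raw one-minute polls means the spike stays visible later.
peak = max(raw)                   # 98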
A lot of these platforms,
even like Meraki, for example,
you know, they're doing a huge amount
of heavy lifting around configuration
and there's a monitoring component.
But they very much push you to the API
and things like that.
If you want to start doing kind of more analysis
and extrapolation, things like that.
And so that kind of integration really kind of does well
with how we then collect and store the data.
And then you can kind of slice and dice it however you want.
So, I mean, typically we think
of the network monitoring is on,
because we care about our physical devices,
those things we've lovingly racked up.
But I'm going to have stuff all over the place,
things in the cloud and so on.
I'm going to guess you can monitor all the things, yeah?
Yes.
As long as we have
an API integration for the device
or we can do SNMP.
We have some way of talking to that device.
Yes, we can.
And then we can integrate it with our databases.
So you can really put all your data together
from different sources.
So remote distributed environment,
wherever I don't have to think hard about this.
Sure.
So you can deploy it on-prem as a VM or in cloud environments.
We do have observability appliances
for distributed networks.
That appliance can run monitoring services
from a remote location.
Example services would be ping polling,
ping-only discovery, things like that.
And then we just forward the results back
to your central StatSeeker server.
And it combines it all together into one view.
So you get the reporting and dashboards
from various sources distributed around your network.
So centralized data store,
but I can have observability out at
my distributed edges?
Yep.
And that really lets you get consistent visibility
across sites.
And then you don't have to redesign your monitoring
around lots of separate tools that way.
Okay.
So once you're gathering this data,
what are you doing with it?
What am I as a customer looking at?
Sure.
So it is a huge amount of data
just because we're polling so often
and pulling in so much.
And then that doesn't really do you any good
if you don't have anything to do with it.
So there's two big things.
It turns data into answers right now
and evidence for later.
In the moment,
all that data is driving dashboards, reports, alerts.
So your team can see what's changing
and respond quickly.
And then over time,
because we've got all that historical data
at full resolution that we were talking about,
you can replay incidents,
do root cause analysis without losing detail
to averaging, roll-ups, or aggregation.
Then there's also the proactive side.
StatSeeker's reporting engine
uses historical baselines to model
typical behavior.
So that supports forecasting, trend analysis.
And that really allows you to spot bottlenecks or shifts
before they turn into outages.
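StatSeeker's baselining internals aren't detailed in this conversation, but the general idea can be sketched with a simple stand-in: flag a sample that sits more than k standard deviations from the historical mean. The `is_anomalous` helper and the k=3 threshold are assumptions for illustration:

```python
# Simple baseline check: flag samples far outside historical behavior.
# A generic stand-in for baselining, not StatSeeker's actual model.
import statistics

def is_anomalous(history: list[float], value: float, k: float = 3.0) -> bool:
    """True when value sits more than k standard deviations
    from the mean of the historical samples."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return abs(value - mean) > k * stdev

# Eight historical round-trip-time samples (ms) for one interface.
baseline = [10.0, 12.0, 11.0, 13.0, 12.0, 11.0, 10.0, 12.0]
```

With full-resolution history to draw on, the baseline window can be as long or as short as the question demands.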
Give me an example of a dashboard.
How would I construct a StatSeeker dashboard?
Maybe like a typical customer
what they do with a dashboard
that you've seen be useful?
Sure.
Our dashboards are all GUI-based.
It's very similar to Grafana
if you're familiar with Grafana.
So we offer a full library of pre-built panels
for most of the network metrics.
So a good example would be
utilization on your receive and transmit side.
You can select a few interfaces,
select the panels we've already got built for you
and drop those onto a dashboard,
specify your time frame, what devices you want to look at.
StatSeeker will start graphing that immediately
and then you can drill down,
get as specific or as vast as you want.
And if we don't have a built-in panel,
though we do have hundreds of them
for all the most common use cases,
we do a lot of really advanced queries
so you can design your own panels,
HTML support, you can go to outside data sources,
really, really get in there
and build exactly what you want
based on all the data we're saving for you.
That's it, exactly what I want.
That's it.
So depending on what environment I'm in,
I'm going to care about certain things
and not care about other things.
And there are certain metrics that might be really important
to me, and other things in other environments
that maybe I don't care about.
But that you just said the magic words for me.
I can build what I want.
Exactly.
And if you don't have the pre-built panel,
I can feed that data in.
Are you going to be able to get that and graph it for me?
Super customizable.
So can you walk us through a couple of customer use cases?
You know, we've got a US retailer
who has a very large environment,
10,000 stores across the country.
So a lot of challenges.
They wanted to map their entire environment.
So you can kind of plot lat-longs
against devices in StatSeeker.
And we were able to kind of work with them
and create a proper 50,000 foot map view
of their entire environment.
It was fully dynamic.
They could go right to the store level if they wanted.
And they plotted every critical device,
including their primary and secondary
router at that location.
And they could see for their management users,
for their network team,
things that were happening in real time,
especially things like weather events,
power outages, as well as planned firmware,
kind of rollouts and changes that were impacting stores.
And in addition to seeing it kind of happen in real time and live,
they could then use that data historically
and look at which stores were impacted for the longest,
when services came back and restored.
And they could then kind of use that information
for future kind of planned changes
and other seasonal reporting.
Let me jump in here, Andrew.
So you just said 10,000 stores.
This is a huge retail environment, right?
Yeah, correct.
Yeah, this is okay.
We've got some of these customers, yeah.
And this is not just red light green light.
I heard a lot more kind of information
that I can pull in here.
So I shouldn't be thinking of this as like,
oh, you tell me when things are down.
There's a lot more going on here.
Yeah, correct.
Because we kind of have a lot of flexibility
in how you choose,
you as a customer choose what metrics you want
to kind of focus on and be a kind of a priority.
Yeah, stepping away from just the traditional up, down,
you can actually be specific about maybe looking at
round-trip time or IPSLA or health metrics
or even I've worked with some customers on an aggregate of those
that kind of creates a bit of a health score.
So there's kind of flexibility to do that type of thing
right within the platform.
And you even mentioned outside events that aren't
necessarily related to network performance, like weather.
Like I'm from the US East Coast
and we just got hit with a couple of big storms
over the past month that did impact like shipping
and delivery and the ability to get stuff
to people and all that.
And I can imagine that could be useful information
particularly in a retail environment.
I've had customers that have their network environment
on a map kind of side by side
with their weather events as well as the information
that comes through an HTML panel
from their different power suppliers
that actually gives them real-time information
around outages that are occurring
that impact parts of a city.
So you can correlate a whole bunch of really nice kind of
pieces of data alongside that helps to kind of make
a quicker decision for your network team
and it also helps with things like management
to just kind of calm a management user to say
this is fine, this isn't a network issue.
This is kind of something that is unrelated to us.
Don't stress.
Yeah, that mean time to innocence,
by just pointing out, there's six feet of snow
and the power's out.
Oh.
Yeah.
So there's lots of network monitoring
and network performance monitoring products out there.
How would you differentiate StatSeeker?
So there's a lot of tools out there.
There's open source solutions.
You know, every vendor now has a platform.
So there is a lot of options out there.
I guess going back to that point
around what our DNA is.
It's really around data quality.
The efficiency in how we collect kind of and store data
and then the focus of what StatSeeker is all about.
So accurate data for us is everything.
You know, we value data kind of quite a lot
and we value our customers data a lot.
So that 60-second poll cycle that is kept unaveraged,
that's huge for our customers.
And that's why some of them have had StatSeeker for 15, 20 years,
because they go out to the market.
They look for a solution that can replicate
what StatSeeker is doing for that kind of source of truth.
And it's not there.
You know, they can't replicate that, especially at scale.
You know, with this big data collector,
highly efficient platform
that doesn't hurt users by scaling up.
You know, so we don't have to have lots of heavy tuning
and complicated licensing to kind of make StatSeeker work.
So by that, in its very nature, you know,
that's kind of one of the differentiators
that kind of makes it, administration-wise,
a simpler kind of solution,
and also a lower total cost of ownership solution.
And one of the important pieces is that there's a lot of solutions out there,
and StatSeeker is not replacing your SIEM.
It's not trying to be your ticketing tool.
It's not trying to do everything.
It's very focused for network teams
and networks on creating
and providing clean, accurate metrics
that can actually power proper decision-making kind of processes.
And that's why even this morning,
I'm on a call with another customer
and they're just saying,
StatSeeker is our source of truth.
We can refer to it, we can rely on it.
It's trusted within a team
and they can then use that information
to kind of drive what they're doing during a day,
a week, a month, a whole year, for example.
Now, you mentioned licensing along the way. Can you talk about that?
So, okay, how is the product priced?
Great question, Ethan.
But everybody wants...
I mean, obviously.
That's why we'll put it at the end.
Look, if your listeners take nothing away from this,
StatSeeker is deliberately uncomplicated.
It's a very kind of focused solution
on doing what it does in the network,
monitoring space really well.
And we extend that simplicity to our pricing.
It's a per-instance subscription with a device count.
So, how many devices you want to monitor across your network?
That's what makes up your kind of licensing model.
So, that starts from $6,500 for 250 devices
and then scales up there into what I spoke about
before, tens of thousands of devices
from a single StatSeeker instance.
And what about...
Are there add-ons or
specialized modules?
Because you mentioned integrations
with other platforms and products.
Yeah, then the kind of modules
that we spoke about, things like Cisco Meraki
and Cisco ACI,
and then also our observability appliance
for that distributed remote polling.
Those are kind of add-ons
if those relate to your technology and your environment
or the architecture of what you need.
So, we can kind of bolt those on
for additional cost
in and around that licensing model.
Okay, but $6,500 for 250 devices
out of the gate and then you scale from there.
Correct, yeah, correct.
So, it's not there to blow up your IT budget.
It's there to really...
In a lot of ways, complement,
but be this powerful solution
that can really scale properly with you.
I think I know what you mean by devices,
but just to qualify,
because I know a lot of network monitoring platforms
are priced by device.
You mean a device, like a switch with 48 ports.
You're not billing me per port.
Yeah, correct, yeah.
Like, there'll be a device
and it'll come with a certain number of interfaces.
But what is kind of important, I guess,
is that device count.
So, device count.
Okay, yeah.
Not the element count.
That's the way it's billed on some other platforms,
by element, and an interface is an element.
That's not what you're doing.
So, we mean actual devices.
Yes, okay.
Yeah, nice.
So, no, there's no sensors.
There's no kind of...
There's no flows.
You know, we're trying to keep it kind of simple.
And it is.
So, the other big question
then besides pricing is,
can listeners play with it?
Look at it.
Try it for themselves.
100%.
That is what I always recommend people do.
We offer a 30-day free trial.
It's fully featured.
So, we're giving you everything in the core product
when you do that trial.
And it's just like everything else we talked about today.
It's super simple to set up.
There's no credit card.
You don't have to get on a sales call.
And the trial is set up to show you value very quickly.
You can deploy and see live data fast.
Then you can jump into those pre-made dashboards
I was talking about to find patterns, gaps in your capacity
and just start producing actionable reporting right away.
Usually most users can be up and running
within a couple of hours.
Install it.
You run a discovery to find your network devices,
put in your credentials,
And it's pretty much up and running from there.
And then you can really drill in.
So, yeah, that's a good way for listeners
to validate in their own environment.
Whether it's a good fit for them
and how quickly they can detect and explain issues
going forward once they have StatSeeker
as part of their daily process.
You guys must like engineers.
No credit card, no sales call.
It's like it's almost like you understand how engineers think.
I don't want to talk to anybody.
Let me try the thing.
Please just leave me alone until I'm ready.
100%.
I definitely come from the engineering side before.
I spent a long time building networks
and I was a StatSeeker customer for a long time.
So I know I know that side of it very well.
All right, well, we're at time,
but hopefully we've whetted people's appetites to go
and explore this for themselves.
If they want to go find out more
or get the download, where should they go?
statseeker.com slash net ops has everything you need on there.
Just check out that website.
You can sign up for the trial.
Get some more information on our features,
system requirements, all that.
It's all documented statseeker.com slash net ops.
All right, that's statseeker.com slash net ops.
We'll also have that link in the show notes
that accompany this podcast.
And if you do reach out to StatSeeker,
we'd love it if you let them know
that the Packet Pushers sent you their way,
because that helps us.
Because sponsors do make what the Packet Pushers do possible,
so we can offer high quality,
deeply technical content for your professional development for free.
That includes more than a dozen technical podcasts
on networking security,
IPv6, DevOps, and more.
We've also got an industry blog,
two weekly newsletters, our community Slack group,
a YouTube channel, and an IRC group.
You can find it all at packetpushers.net.
All free, no login required.
You can hear us on Spotify,
find us on LinkedIn, and read us on our blog.
And last but not least, remember:
too much networking would never be enough.
