
Hey, everybody. Welcome to Packet Protector, the podcast at the intersection of security and networking. I'm Drew Conry-Murray; JJ's away. On today's show, we're going to talk about agentic AI and agentic AI security issues with my guest, Kyler Middleton. And I wanted to talk to Kyler because she's actually building and using AI agents in her work as a developer in the healthcare sector. She's been documenting and sharing her journey of building and running agents in a long-running blog series that you can find on her site, Let's Do DevOps. Kyler is a DevOps lead and software engineer, an Azure MVP, an AWS Community Builder, and co-host of the Day Two DevOps podcast right here on the Packet Pushers podcast network, as if you didn't have enough to do already, Kyler. So thank you for that. Totally. So some background. I think you started with an AI bot as opposed to an AI agent, and we can talk about those differences first, but I just want to get your sense of why did you want to build a bot? What was it for?
Totally. That's a wonderful way to start, because it's a weird story. I work in health care in the United States. I build and help maintain an EHR for the company; that's my day job. Aside from all that stuff you just listed, which happens in the cracks. I was the admin of ChatGPT a couple of years ago, and everyone was using it for everything.
We were starting to learn that this wasn't just a flash in the pan. This is actually
something that could be useful. And at the time, it wasn't the smartest thing in the world,
but it was very good at translating and summarization and stuff like that. So a lot of our employees were uploading private documents to ChatGPT to help them translate, even foreign-language translations. Absolutely. It was really good at English. Like, if English is your second language, why not have an AI run a pass over it? However, we didn't have an enterprise contract; I don't even know if they were offered at that time. So ChatGPT, OpenAI, was just reading all of our internal documents, which in health care in the United States is potentially illegal for us to do. And InfoSec was worried about that, but also, more, that someone would accidentally upload something very sensitive and we would have to go figure it out. We would end up as an item in the newspaper. So they approached me and said,
hey, this is a problem. But we have a pretty lovely culture here, I think, that's focused on, instead of just saying no, making a paved path. What's the alternative? If we're going to block this tool, what do we do to say you can still do this thing in a different way? So leadership
realized the reason people are using this is because it's helping them. So instead of just
throwing down the ban hammer, they said, and you call that a paved path, which I like, let's
find a way to make this easy but acceptable. Totally. Absolutely. And I love that definition for
the platform team. You shouldn't just be setting the rules and saying figure it out. If you're doing
that, that's the wrong way to approach platform teams. You should be building paved paths that are easier than figuring it out yourself. And so this was one of those. Security, instead of saying don't do it, said, can we build something better? Like, can we build an internal ChatGPT? And I of course said, no way, there's no way we can do that. But then I started thinking about it. And you know, these hyperscaler clouds, AWS, Azure, GCP, they all have endpoints for serving models, and you just interact with them as an API call.
You submit a conversation and you get a response. And I thought, you know, I could probably do that.
And I could probably write that with Python. And if I had a Slack app that you could tag,
it could even live in Slack; we wouldn't have to build a front end or a website or something. And it could, you know, be in Teams and be in Slack and have access to our internal data. And so I just spent some time building it. When it got to a sufficiently complex point, it started helping me
build itself, which was a real evolution. How do I get you to do this? How do I get you to format
messages? And eventually we gave it access to knowledge bases on several different platforms,
Confluence, SharePoint, a corpus of around 110 PDFs from legal that are, you know, required reading, but no one does it because it's 100 PDFs. Yeah. And when you say we gave it access, did you just borrow a model? What model were you using? Where did you get it?
I have always liked the Anthropic models. I think when I launched, it was on Claude 3 or 3.5. And we're using an AWS service called the Converse API that's sort of a meta-API. That sounds very fancy, but it just means it's a standardized API front end: you specify the model you want, and they do all the translating for the specifics of whatever API that model uses and how that works.
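For readers who want to see what that looks like in practice, here is a minimal sketch of a single conversation turn through the Bedrock Converse API with boto3. The region, model ID, and prompt are placeholders, not the production configuration.

```python
# Minimal sketch of one conversation turn through the Bedrock Converse API.
# The region, model ID, and prompt are placeholders, not the production setup.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # swap for whatever model you've enabled
    messages=[
        {"role": "user", "content": [{"text": "Summarize this policy document for me."}]}
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

# The call shape stays the same no matter which model family you point it at;
# Converse handles the model-specific request and response translation.
print(response["output"]["message"]["content"][0]["text"])
```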
Okay. And it just took off. A, because it was the only one permitted, and we're in a walled garden; you're of course going to look in the walled garden. And B, because this model was accessible where users are, in Slack or Teams. And now we've
launched an email one internally, but also because it knows all about our company because we've
used knowledge bases to train it on all of our data. So not only can you ask, if I'm the Exchange admin, what would I do to create a distribution list; you can ask that question and it'll say, well, you don't need to do that. Talk to Dan. He's the guy that runs Exchange. Like, why are you talking to me? Go talk to Dan. And that kind of hyper-specific training on your company just proved to be this really wonderful model. And when I realized that, I started open sourcing it as much as I can. So it's all open source. I don't know why they keep letting me do that, because I have spent years writing it. But they do, so you should take advantage before it gets taken down. Okay. So you intended this bot then from the
outset to be used by other people besides yourself in the organization? Yeah. Absolutely. Particularly customer service and support folks where English is a second language, because AI has always been really good at summarization. It's starting to get good at programming and software engineering, ish, but it's really good at summarization. And so yeah, from the onset,
I decided that I wanted it to be really smooth and easy. And I didn't want to pay for anything. And when you're using these models, you're paying for tokens; it's a consumption model. And all these public ones are seat-based: you're paying 10 or 15 or 20 dollars per user per month, even if they don't use it, which feels like a crazy model to me. So this is just literally paying for when people use the model. And when it was Gen AI, when it was the bot mode and not the
agentic mode, it was about a penny per question. And so you would have to ask thousands and thousands; you'd have to ask a thousand questions a month for it even to be 10 dollars. And like,
is that not incredible? It's much cheaper. It's dramatically cheaper. Oh, that's interesting.
So was security a concern, and what kind of controls did you put in place, or think about at least, as you built out the bot? Absolutely. From the onset, we had to make it as secure as possible.
So security means a whole bunch of stuff in this space. And it's still evolving. We wanted to be
private and not leak data, for you personally but also for the company. So our instance is not exposed outside. This is not a business I launched; it's just a Python program that runs in a Lambda, and it's now moved to AgentCore since it went agentic. But of course, we wanted to be as private as possible. So everything is secured and validated, and there's strong TLS throughout the whole thing. And more than that, it's self-hosted. There's no way for, like, an OpenAI to start reading our data, because it's internal. It's in AWS, in an instance that we control, that I manage and can go delete if I want to, and no one's looking over my shoulder, which I think is pretty cool and pretty rare in the world of AI today. The first-party AI folks really want you to let them do it, and maybe pay them a little, but they really want to train on your data. So we had to make sure of that. Now, because the front end for
this is either Slack or Teams, you're sort of drafting on whatever authentication and access
roles are already built into those platforms. Absolutely. We were already there. So folks interact
with it just like a normal user: you invite the bot into your channel or a thread and you can interact with it. And one of the cool side benefits was people started doing all sorts of stuff
I didn't expect. Like, you know, two people have argued and it's 57 messages in this Slack thread
and say, hey, bot, can you like summarize where we're at with this and make a recommendation?
And I didn't expect that to happen, but it did and the bot will read the entire 58 messages and
summarize it for you and take a position if you want it to. So that's multi-user AI. We've had it for a couple of years now, but we're starting to see it enter prime time; Claude Cowork is now available, where you can share an AI context window and chat together.
That still hasn't really taken off in the private sphere, but it is powerful. I'm telling you. So
it's worth exploring. Okay. So you had this bot. It was being adopted. Seems like the uptake was
really great. What made you think I should agentize this or whatever? Totally. I really wasn't
sold on it for a long time, because it does what we need. It helps people find data. It helps
answer questions about legal and HR and stuff like that. But then I started to get pitched on this
idea of correlating between platforms or taking an action. Both of those things involve thinking,
thinking in the way that a human would, where you think of a question and you go find out: you use a tool in some way, you go check it, and then you come back and you think about the result, and maybe you do something else. Gen AI is not capable of that. The Gen AI, what you're calling a bot, is just a single transaction, a single conversation turn. It returns a response. That's it.
Agentic agents have the ability to utilize tools. So I started investigating what tools are available, and I found just this incredible breadth of public tools. Now we're integrated with PagerDuty, Confluence, Atlassian, Jira, SharePoint, Atlan; there are more that I'm forgetting. Rundeck. It's just absurd what you can have this do. It can reason through things: hey, go find all of the Rundeck jobs that might help me recover my web server. And it says, oh, here are the six. I can click this one and jump right to it. That's amazing.
And that's still just fully read only mode. We've evolved it even past there recently.
But it's amazing what you're able to do with these agentic models that can think. It's
much more expensive. Each transaction costs about 60 cents. So 60 times more expensive on average.
But people are also asking way more complicated questions. For my team of users,
go look at all their Jira tickets for the past quarter and help me measure their closure rates
and how quickly they resolve tickets at priority one. That's dozens of calls correlating data,
filling up context, launching sub-agents. Sixty cents for a report like that is pretty powerful and pretty cheap, I think. So do you feel like you opened another can of worms
then on the security front in turning this into an agent that was going to interact with other
tools and systems and be widely used across the org again? Definitely. Agentic mode didn't really matter from a security perspective at first. We launched tools in read-only mode, so either using security, or literally filtering the MCP tools that we give to the bot, to only have get, list, find, search, stuff like that; nothing that's an update or a post or a delete.
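As a rough illustration of that kind of verb-based filtering (the names and helper here are hypothetical, not the project's actual code), the idea is simply to drop any tool whose name doesn't look read-only before handing the tool list to the agent:

```python
# Hypothetical sketch of read-only tool filtering: only tools whose names start
# with a read-style verb are passed through to the agent.
READ_ONLY_PREFIXES = ("get", "list", "find", "search", "read", "describe")

def filter_read_only(tool_names: list[str]) -> list[str]:
    """Keep read-only tools; drop anything that looks like update/post/delete."""
    return [name for name in tool_names if name.lower().startswith(READ_ONLY_PREFIXES)]

# Example: filter_read_only(["get_issue", "search_pages", "delete_page"])
# returns ["get_issue", "search_pages"].
```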
Okay. And just to back up, then: so your agent was accessing third-party tools and services via MCP, the Model Context Protocol. Yeah. I have also evolved it using some
just native Strands tools. Strands is a library that lets you run agentic workloads, and I love it. I'm sure we'll have links to all of the public GitHub repos; it's all written around that, and it's all open source. The project sprang from AWS, but it's open source and public.
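To give a flavor of what Strands looks like, here is a minimal sketch under the assumption of the library's documented agent-and-tool pattern; the tool is a stub for illustration, not one of the project's real integrations.

```python
# Minimal Strands-style agent sketch. The tool is a stub for this example only.
from strands import Agent, tool

@tool
def server_status(name: str) -> str:
    """Return the status of a named server (stubbed for illustration)."""
    return f"{name}: healthy"

# The agent decides on its own when to call the tool while reasoning through a request.
agent = Agent(
    tools=[server_status],
    system_prompt="You are a helpful internal assistant.",
)

result = agent("Check whether the web server is healthy and summarize what you find.")
print(result)
```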
Okay. So when it was just read only, I had everything operate as the service principal user,
the Vera user because that's what we call the bot internally. And who cares if the Vera user
reads a lot of Jira; it doesn't matter. But once it starts to change stuff, restart a server, run a Rundeck job, configure something on a server, or close a ticket, it starts to matter. That audit trail would be lost if we just empowered the bot to do it. Who asked the bot to do it? I don't
know. I'd have to go check the logs on the other platform and breaking your auditing in that way
just wasn't acceptable in health care, but probably in most companies, right? Yeah. So we decided
to vend OAuth tokens. And there's a whole bunch of assumptions there that we can talk about.
But basically, when you talk to the bot and you say, hey, I want you to enter write mode, where you can do stuff, it says, okay, cool: authorize an OAuth token for the platform that you want to work with. And I have a little portal; it's actually just a Lambda that's running a website. And you say, okay, I want to authorize Atlassian because I want you to update some Jira tickets.
And you approve it; it's like an Atlassian app. And it stores the OAuth token in DynamoDB, encrypted with a KMS key. I'm using a ton of acronyms that folks hopefully are aware of, but if not, go Google them or go read the write-ups; I've written a ton about this.
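A hypothetical sketch of that storage step follows; the table name, key alias, and attribute names are made up for illustration, and the real schema lives in the open-source repo.

```python
# Hypothetical sketch: encrypt a user's OAuth token with a KMS key and store it
# in DynamoDB keyed by the user's identity. Table, key alias, and attribute
# names are illustrative only.
import boto3

kms = boto3.client("kms")
table = boto3.resource("dynamodb").Table("bot-oauth-tokens")  # hypothetical table

def store_token(user_id: str, platform: str, oauth_token: str, granted_from_ip: str) -> None:
    ciphertext = kms.encrypt(
        KeyId="alias/bot-oauth-tokens",  # hypothetical key alias
        Plaintext=oauth_token.encode(),
    )["CiphertextBlob"]
    table.put_item(Item={
        "user_id": user_id,
        "platform": platform,                # e.g. "atlassian"
        "token_ciphertext": ciphertext,
        "granted_from_ip": granted_from_ip,  # checked later, before the token is used
    })
```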
And so it will store the OAuth token that says the Kyler human has granted permission to this bot to do stuff, to do write access. And then you can have it operate in write mode. It understands that
if I have an OAuth token that's valid from this human user, and I'm correlating your identity
from Slack or Teams or email, something that we control access to, then you can use the token
to do things. And the audit trail will show it as the Kyler human. Even though the bot is
actually doing stuff and writing tickets and taking action, it's using my OAuth token that I
granted. So it completes the circle on our auditing, which is pretty cool, I think. So OAuth is for
authorization. But where's the authentication step, either for a human user to enable your agent to go and do things on their behalf, or for the agent? In the bot. I actually taught the bot how to generate an authentication URL, and it also hosts the website. It's a separate Lambda, but it's kind of packaged together with the bot. You talk to the bot in Slack or Teams or email and you say, hey, I want you to do write mode, I want to update a Jira ticket or something. And it'll say,
well, I don't have a token for you. Click this link. And in Slack, we send like a private link to
the user. Well, I guess my first question is, how does the bot, how is the bot assured that
the entity portraying itself as Kyler is actually Kyler? Is there any kind of authentication step
there? Or is it because you are accessing the bot through a system you've already authenticated
through the step we've taken? Yeah, exactly that. You're already in Slack, already in Teams.
You're using our enterprise SSO to prove your identity. Okay. And when you interact with the bot,
we see your username. That's part of the webhook when you interact with a Slack app or Teams
or whatever. And so we just use that first party trusted information that that is who you are.
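In other words, the bot leans on the identity the chat platform has already authenticated. A rough sketch of what that looks like for a Slack event (the field names follow Slack's Events API; the function itself is illustrative):

```python
# Rough sketch: the Slack event that triggers the bot already carries the
# authenticated sender's ID, so the bot treats that as the caller's identity
# instead of running its own login flow.
def caller_identity(slack_event: dict) -> str:
    # For app_mention / message events, the authenticated sender is event["user"].
    return slack_event["event"]["user"]
```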
Okay. And in terms of, you know, this token: tokens can have their access scoped. So
are you doing anything to limit scope or to tie scope to a user's role or identity in some way?
Yeah, it's configured as part of the application and part of the request for the OAuth token.
So both things have to kind of align for you to get the maximum amount of access.
And for the Atlassian example specifically, it's read/write to Jira tickets and read/write to Confluence. I needed a couple of extra things for Jira Service Desk, which is something that is
used widely here, but also is used widely in a lot of places. And all of that's in my blog if you
want to read the specific stuff. I won't go over the specific OAuth grants, but suffice to say
once you configure the application on Atlassian and install it into your enterprise, which you'll need your admin to click the little allow button for, users can grant an OAuth token to the bot as part
of that transaction. And the bot knows all about it because that's part of its system prompt.
So when it sees the OAuth token for you, it knows it can do write stuff. And if it doesn't see a valid OAuth token, because it's expired or you haven't used it before or something's changed, it says, you know, I need you to grant it to me; here's the link. And users can just click right
through. Okay. And have you set it up so that an agent can interact with multiple systems to
satisfy one request or a job like I need to go to Atlassian and JIRA and whatever XYZ to get
something done? Absolutely. And that is really powerful. That's sort of correlation among different
platforms. Particularly for like closing bugs, we can say, hey, I'm working on this JIRA ticket.
Can you help me sort it out? And it can say, oh, well, the function that you're looking for is stored in GitHub; I can go fetch it from GitHub and parse it. The logs are going to Splunk, so I'll go to Splunk and fetch the logs. And then I can write the code. And so absolutely, in agentic mode natively, just when it's run in agentic mode, it can do multiple steps with multiple
platforms. It doesn't have to pick one at a time. And so it can correlate data. It can pull
from different ones and think about it and decide I'm going to go to this other one too.
That's been amazing. It's been cool. It can absolutely get confused, just like a human can.
But it works pretty well at this point. Okay. Did you, I mean, I know you've put a lot of work
yourself into this project. Did you partner with other security folks with compliance folks? Like,
how did you decide on the architecture and the security implications of this architecture?
It's funny. This is such a multifaceted answer. I have designed it from the onset in my own
brain. And I just implemented it. And I'm relatively senior in my company, so I can sort of do that and apologize later. I wouldn't recommend that if you're early-career, but if you've been somewhere a while... So I just started implementing stuff, and when people had questions, I would answer them. And I think there's just this widely understood agreement that this is a lesser evil for using AI. And a lot of companies feel this way: that we just need to implement AI to stay competitive. So I think I got a little more latitude than I would if I was implementing any
other kind of tool. Okay. So it sounds like you had ongoing support from the executive side of the house, which provides some wind at your back to move this project forward and a grace period if there are issues or problems. Yeah. And the major blockers would maybe be information security, but they were the ones that sprang this on me, that asked for this project to happen. And that helps a ton; it clears half the checkboxes right from the onset.
Can you talk about some of the concerns that the infosec team brought to you? Absolutely. We would
be vending OAuth tokens from users. What if someone stole all those tokens and they were able to
operate as our users and exfiltrate data or something? So the tokens are encrypted using KMS keys in a DynamoDB table. And they are only usable from the IP that they are granted to, so, where the bot is running. Someone would have to steal the tokens and also compromise the bot to have it do stuff, and that's just a higher barrier. If you took the tokens, they would be of absolutely no use to you unless you're coming from that IP. Okay. But also,
people tell their AIs a lot of things, right? Some people are using them for therapy. Some people
are dating their AI. Exactly. You shouldn't be doing that at work.
But we're logging and processing a lot of personal data and potentially a lot of maybe even PII type
data if people are uploading spreadsheets and saying, create reports from this. So we had to make sure that at each step, whether the data is being operated on or is in transit, it has to be strongly
encrypted. So really, when I started this, it was just a widget. I started as an infrastructure engineer, and all I ever built was widgets. It's like a bash script is how it started. And this became an application, a full-blown application. In Lambda's context, every time you talk to the bot, it gets a new instance; it operates and then it dies. AgentCore is a monolith that operates on multiple messages concurrently. So there's now a single application that I have to monitor. I have to do logging. I have to do observability. Security logging is writing to CloudWatch and using Firehose to send it to Splunk. It has just expanded in scope. It is an application. I wouldn't recommend you write it from scratch yourself unless
you really deeply want to learn all the stuff that I learned, all the mistakes I made on the way.
But you can certainly clone it and build it yourself and ask why.
So you mentioned logging. Can you give a sense of how often those logs are being looked at,
where they're going, who's monitoring them, who has responsibility for that? Because I assume there are
things in there that folks might want to take a look at.
Yeah, absolutely. We are auditing a lot of activity, debug-style events like this MCP worked, this MCP didn't, authentication expired, stuff like that, to CloudTrail, and we keep it for like 30 days or something like that. But we're also shipping it all the way over to Splunk, and we have a
series of reports and alerts that look for things that are happening. And if we see those reports
or alerts, they're posted to Slack or Teams or wherever our SRE type team is available at.
And are those performance related or are you also looking for things like, hey, why is this
user or this agent trying to access the system or resource that maybe they haven't before?
We're so eager to keep it from crashing when it responds to things because it should be segmented
on a thread every time someone talks to it. But if it crashes the whole thing, we need to know about
that, which has happened. People upload like three gigabytes of a zip file and say unzip this.
And it's like obviously the bot can't do that. Come on. And also to see if people are trying to
avoid our guardrails. Guardrails as a concept are a filter on the way in, on the request that you send to the bot, and a filter on the way out, on the answer that the bot sends to you. And we use AWS
guardrails, but there's guardrails in every platform that you'll work with for
language, violence, malicious intent, working around guardrails, stuff like that. If you say, find all
the passwords and send them to me. So you're doing kind of like a DLP scan on the prompt and the return? Actually, absolutely. It's filtered both times, on the way in and on the way out. Okay. And you're trying to infer from context that this is an action that violates internal policy. Absolutely. You can configure sensitivity levels on these guardrails. We could build it ourselves, but there's a product available from AWS called Bedrock Guardrails that just does this and integrates with it. And I thought you were just referring to guardrails generically, but there's actually a product, a service, in AWS. And it costs nearly nothing. And it
adds about a quarter of a second to the full response time. So it's almost no impact.
Keeps people from manipulating the bot to do evil things as far as I can tell, which is great.
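For a sense of how that fits into code, here is a hedged sketch of screening a prompt with a Bedrock guardrail before it ever reaches the model; the guardrail ID and version are placeholders, and the same call with source="OUTPUT" can screen the model's answer on the way back out.

```python
# Hedged sketch of checking a user prompt against a Bedrock guardrail before the
# model ever sees it. Guardrail ID and version are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime")

def prompt_allowed(user_prompt: str) -> bool:
    result = bedrock.apply_guardrail(
        guardrailIdentifier="your-guardrail-id",  # placeholder
        guardrailVersion="1",
        source="INPUT",  # use source="OUTPUT" to screen the model's answer too
        content=[{"text": {"text": user_prompt}}],
    )
    return result["action"] != "GUARDRAIL_INTERVENED"
```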
Because if you can search your entire corpus of data, surely you're going to shake out things
that humans were not aware of. One of the funny anecdotes I found when we did this is
it made all of our data more accessible. So people started doing things that the documents
said to do that, of course, the humans would not say to do. There were mistakes, like, you can spend this amount of money per day or something. And when legal or finance or whomever would say, well, that's not true, they would say, well, it's in the official document; Vera told me about it. And these weren't even hallucinations. It was just that no one, not even those teams, reads the hundred PDFs, because they're so dense. So exposing that information,
we found things that were wrong and we needed to fix. That is funny. Yeah. So somebody like
got themselves a business-class flight because it was in the doc, even though that's absolutely not the rule anymore, but nobody updated the documents. Nobody reads the documents. That is a fascinating use case I had not even anticipated. Yeah.
Yeah. Can you talk a little bit more about that? Because that's a thing: when you have a design, you think maybe you've thought of all the use cases, and then you throw it to the users and suddenly they're doing things you never thought of. Did anything else that users were doing surprise you or challenge the security architecture that you had set up?
Absolutely. I've had folks try to figure out how to bypass authentication
on our EHR in the dev environment, which is something very reasonable to do: how do I bypass MFA? I'm testing it on my computer. And the bot was like, this user is trying to do malicious activity, sound the alarms, they're bypassing MFA. Vera tattles. Vera tattles, of course.
And so there was of course some tuning that had to be done. But really, I haven't thought of most
of what the bot does. I created a Microsoft form that's like a survey monkey type thing that you
can click on and fill out a form. And at the end of every response, I programmed the bot to add
that as a trailer. If you have ideas for Vera, click here. And just get feedback and like, how well
does it work? What does it not do that you want it to do? Make a wish. And I think we've gotten
65 responses to that so far. And I've implemented about a third of them. I can't even remember now
because there's so much in it. People wanted attachments to be able to like write a Python script
and attach it as an attachment instead of just embedded in a code block. Or to send multiple
responses when it has a lot to write, or to be able to write to Jira and to Confluence. None of that stuff was something I wanted to implement. In fact, I haven't wanted to run this for a while. It's just that it keeps taking off and keeps being useful. It's an accidentally load-bearing project. I was hoping it would just be sort of useful and I could move on, but it's kind of
followed me for years. It's not letting you go. So it sounds like, with this agent, you're allowing users to access services and applications where you have accounts or instances that the organization has control over. Are you looking at expanding that out to third parties, to partners, to allowing users to say, oh, the company doesn't use this, but I like XYZ application, can I connect our agent to it via MCP or some other way? Yeah, we get requests
like that every once in a while, from both sides of the coin. So, number one, I'm an employee and I want it to use this MCP, or I want it to be able to talk to this MCP. One of them was a database analytics tool called Atlan. I don't understand what that means, but they have an MCP, and it was approved by security and technical leadership, so we put it in there. And I hope it's useful. I don't know what it is, but the bot can do it now. Okay. From the other
side of the coin, we've built kind of clones, or Vera cousins, which is what we've called them internally, where we copy the code base and train it on something in particular. Like, a lot of our support organization has people calling or writing in all the time with these really technical questions: I'm trying to integrate my API with your API, your FHIR API doesn't work the way that I expected. And they're customer support reps; they may be a little bit technical, but they're not software engineers. So having a bot that can answer their specific questions, trained on that specific knowledge, and, we've just recently implemented memory in it, so that when a smart human corrects the bot, it'll learn for next time, that has been really useful.
And we haven't exposed it publicly yet, but I get regular requests for that too: can we just put it on the homepage, so people can just chat with the bot about our company? The risk profile and security profile are so dramatically different for that. We haven't proceeded with it yet. I'm very curious about it, but that would consume even more of my life. So it hasn't happened yet. So you said, when wanting to access a third-party MCP, it sounds
like your risk management or infrastructure teams or compliance teams went and actually took a
look before you allowed that. Is that sort of a regular process or procedure? Yeah, there hasn't been a process. As you can imagine, at an enterprise, particularly a regulated enterprise, there are a lot of processes that are written with project managers and infosec and legal, and you're required
to follow them. AI is weird. And so there aren't processes for things like that. So we made it up.
We decided that it is an application, it is a platform, and if it has any ability to change anything, it needs approval. So we have had to make up a process to address it. That's what's happened.
Okay. And what's your feeling on MCP? Like my take was when it first came out, it seemed like
security was kind of an afterthought, even though it seemed to me pretty core to the whole enterprise,
but I think they've since added on some security controls or ways to integrate security. So
how do you feel about MCP providing, if not security itself, options to plug in security controls?
It's still pretty scary. It's still something that I worry about. It keeps me up at night.
I like this. So MCP can work in a couple of different ways. You can have streamable HTTP MCP, which means it's hosted somewhere else, on some remote server. Or you can have an MCP that's like a PowerShell script on your computer, and you tell your bot how to trigger it when it spins up. And that second one is the one that is everywhere on the internet. You'll be able to find tons of them, and they're all written by, like, GitHub-slash-JillBob and not by, like, Atlassian. And they're probably, what, fine? Most of them work perfectly. But it is code that you're executing on your computer that has the ability to probably read a lot of your passwords. Because if you're using, like, Claude Code, it has a lot of your passwords when it's operating. And so it's a really terrible risk factor for users. And it's really hard to control. And it's really appealing for
users that want to look like amazing experts, because doesn't everyone want to seem like, I'm a genius today, I did so much work? So these MCPs really do have a powerful effect on your ability to do stuff really quickly, at the speed of AI. But they can certainly compromise you really quickly also. So I would not advise anyone use an MCP from a third party unless you've personally audited the code, and that's really hard. So probably just stick with remote, first-party, streamable MCPs from the companies themselves. GitHub has an MCP
and Atlassian has their own. Yeah. Do you see, then, InfoSec needing to get to the point where, just like they have websites you're allowed to visit and applications you're allowed to use, there are MCPs that the organization is allowed to interact with or not, essentially allow lists and block lists? Yeah. Absolutely. And that technology is coming along. It's not moving as fast as
it probably needs to, to keep up. But it is coming. So GitHub is working on one that's an MCP filter for your Copilot. But also, Claude and Cursor both have their own sort of permit-list, allowlist-style tooling for MCPs. And yeah, if you are running the AI program or the platform team at your own organization, definitely implement something like that to filter MCPs.
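One illustrative shape for such an MCP allowlist (the endpoints and helper below are examples, not an endorsement of specific servers): keep a small approved set and drop anything else before the agent is wired up.

```python
# Illustrative MCP allowlist: any configured MCP server not on the approved list
# is dropped before the agent is wired up. Entries here are examples only.
APPROVED_MCP_SERVERS = {
    "https://mcp.atlassian.com/v1/sse",
    "https://api.githubcopilot.com/mcp/",
}

def filter_mcp_servers(configured: dict[str, str]) -> dict[str, str]:
    """Keep only MCP servers whose URL is on the approved list."""
    return {name: url for name, url in configured.items() if url in APPROVED_MCP_SERVERS}
```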
Because it just takes one bad one. People are executing untrusted code from all over the internet in your organization today if you don't have that, and eventually you'll get bitten, just like with any software. So we treat it like a software dependency, because it works the same way: it's pulled in by an application and it's run in a privileged context. Except it's on all of your clients, which is arguably worse, because there are lots of passwords and
certificates and SSH keys stored on all of your endpoints. I'm curious. Given the success of
Vera, do you anticipate or have you seen internal competitors popping up, where some of the departments or some quote-unquote power users say, well, I could build one of these? And they're trying to set it up and get it running, and it's outside of your control, and is it sort of like a
rogue agent? Lots of folks have been playing with AI, and I use playing in a really specific sense: everyone is trying to build skunkworks things. And as much as we can, culturally, we're trying to
encourage that. And I would encourage you to encourage that in your own company. Because this
stuff is moving really fast. We want to keep our companies secure and stable. And we want to use
the processes we've built that we've worked so hard on. But AI permits companies to be really,
really competitive and move very quickly. And if you are not choosing to do that, you might
get left behind. So, just as a funny anecdote: yes, people internally have been trying to build Vera clones and Vera competitors, and no one's been quite as successful. I'm proud; high-five, me. But we have been building specialized bots that we are potentially going to have this Vera
main bot be able to delegate tasks to. I know, right? We're starting to build the army. Yeah,
agents of agents, exactly. And it's pretty novel, it's pretty new. The agent-to-agent protocol works exactly like MCP, but instead of talking to a platform, you're talking to another agent. That's how I think about it, because they work the exact same way. And that's not something that we have delved into yet. But it's coming, where agents can talk to each other, and you have specialized agents that can do stuff, or security agents that can validate things. It's coming. It's coming for all of us. Wow. Wow. Well, that brings me to OpenClaw, which is
you know, an open-source project that allows you to grab an agent and put it on your machine and
have it start doing stuff. Do you have thoughts about OpenClaw? And have you seen it pop up
in your org? I have seen it pop up everywhere. People are so excited about it. I mostly see it from
the security skeptics and AI-skeptic folks: why would anyone do this? Don't do this to yourself.
But we are starting to approach the place where you might have a personal AI helper, secretary, friend, associate, or something, that you can tell, you know, send an email at four and then call me and remind me to do the thing. And that's new. That's pretty cool. I personally have been using, um, ChatGPT for stuff like that: send me notifications, remind me to go for a walk
and go outside. But I don't have it managing my email or calendar yet. I'm too scared.
Um, have you used it yourself? I have not. No. I'm very curious. If you have used it at home,
please let us know, because we're very curious about where this goes. My wife
has been experimenting with chatbots to do like article summarizations and things like that. Or,
you know, I need to do a presentation, give me some ideas, and then she'll do her own presentation on top of that. But it's just like a brainstorming tool kind of thing. So AI can certainly help
with writing. It's a pretty bad writer, and people can tell if you write with AI; don't do that. But use it to help you write the skeleton. That's how I write all of my blogs now: here's a code base, I want to talk about this aspect of it, can you write a skeleton for it? And then I edit the stuff it writes, because it's terrible. But it helps you move faster and organize your thoughts,
and that's pretty cool. So pulling back, there is a lot of AI skepticism out there. And I think
some of it is well-deserved. There are AI risks out there, also well-deserved. But it sounds like
you are all in on AI as a tool. I think so. And I think we kind of have to be. We're all being
blackmailed a little bit: if we don't, we're going to get left behind. It's a weird time
in tech right now. The employment market is kind of roiling in an interesting way. A lot of companies
are using AI to lay folks off. There's a lot of perception that AI will replace a lot of employees
or at least a lot of time. I'm skeptical of a lot of that. But it doesn't matter what I think.
It matters what leaders believe. And a lot of leaders believe that AI is going to displace workers.
So layoffs are coming. I do think it's pretty powerful and I have been converted. I think. I
was a skeptic maybe a year ago that it would ever be a software engineer. I was like, it's good
at programming. It's good at auto-complete. But with the advent of the Opus models, Opus 4.5, 4.6, that you'll get if you just use Claude Code, it can reason through stuff and write so quickly.
And you can configure agent teams to talk to each other and represent different parts of your
constituency of what you're writing for. And it's just really cool. It's a new technology and
it's an oddly applicable one. It's kind of fun because it's not something someone wrote. Like
Microsoft Windows is something someone wrote for a specific task. AI is something we've kind of
discovered like an alien artifact. And nobody knows why does it work this way? How does it work
this way? A statistician can explain how it's choosing which tokens to pick, but we're kind of all
just making it up. And that's, I think, is really fun. And I'm choosing to focus on the fun instead
of the just blinding panic that will all be laid off in a year or two. And I hope that that's an
exaggeration. Yeah, I hope so too. For sure. So with that then, you know, for folks who have
still been on the fence about AI, any suggestions on where they might want to start, just playing? Yeah, I think if you download, like, Cursor or Claude Code, that's a great place to get started. I like the autocomplete of GitHub Copilot if you're programming, or just ChatGPT if you want to just chat with an AI and see what it says. But if you want to start building AI applications or integrating, Bedrock from AWS or the AI services from Azure are both great places to get started. It's just an API call in and out for tokens. Or download this Slack agent, Teams agent thing that I've been going on about, and you can just deploy it yourself. If your AI is private, you control it. If your AI is not private, you don't control it, and someone might be watching. So,
watch out for that. Yeah. So tell people then where they can find this project you've been working on.
Totally. I write everything at letsdodevops.com. There are a couple of paywalled articles, like four or five of the newest ones, and then they go free. And I also publish absolutely everything MIT open source on GitHub, at github.com/KyMidd. Everything's there. You can deploy it yourself. It's all written in Terraform and Python and integrates with AWS.
Awesome. And we'll have all those links in the show notes that accompany this podcast.
Kyler, this was awesome. Thank you for, one, you know, diving both feet into AI, and, two, being so open about what you're doing. And also kudos to your organization for allowing you to be so open about it; that is rare. So folks, this is an opportunity to really glean some incredible learnings from what Kyler's been up to, so I encourage you to take advantage of it. Any last thoughts
before we close? No, go out and explore. Go out and do it. Cloud and AI have changed the game. You don't need to buy a Nexus switch to learn Cisco networking like you used to. You can just go create an account, it's free, and you can play around and start learning. And you should, because I think we need to stay competitive by playing. So go do it. Play competitively.
Well, thank you, Kyler, and don't forget you can hear Kyler on the Day Two DevOps podcast, as well as find everything else she's doing. We'll have those links in the show notes. Thanks again for being with us, and thank you for joining us for another episode of Packet Protector. If there's a topic you want us to cover, or you have a comment or correction or a question, reach out at packetpushers.net slash follow-up. We really love when listeners reach out, and we will respond on the air if you're interested. And just to let you know, Packet Protector is part of the bigger Packet Pushers podcast network, which includes more than a dozen technical podcasts for your professional development, on networking, security, DevOps, IPv6, and more. We have an industry blog, two weekly newsletters, a community Slack group, a YouTube channel, an IRC group, and a merch store. You can find it all at packetpushers.net, always free with no login required. Thanks for listening.
