
OK, not LITERAL goblins, but it’s an interesting gremlin that helps us understand model training. Plus, Sony says don’t worry about the PlayStation DRM, but doesn’t give details, therefore skipping the be happy part.
Starring Tom Merritt and Huyen Tue Dao
Show notes found here.
Hosted on Acast. See acast.com/privacy for more information.
Of course, I say that right after you've already pressed it, give me one second.
I didn't put this in my prompter.
This is the Daily Tech News Show for Thursday, April 30th. We're all out of April, 2026.
We tell you what you need to know, give you the important context and help each other
understand.
Today, why goblins infested OpenAI's Codex, how it got rid of them, and what it means for understanding model training.
I'm Tom Merritt.
I'm Huyen Tue Dao.
Let's start with goblins.
So last week, OpenAI released source code on GitHub for the Codex command line interface, or CLI. This revealed the system prompt for the CLI. A system prompt is something that reins in its behavior, no matter what you type in.
So it included instructions.
And I quote, never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals
or creatures, unless it is absolutely and unambiguously relevant to the user's query.
That prohibition is repeated twice in the 5,000 words of the system prompt.
Other elements of the system prompt are probably more understandable, like never use destructive commands, like git reset --hard or git checkout, unless the user has clearly asked for that operation.
That's a good one to have in there.
Also, I think a lot of people will like that there is a system prompt that says not to use emojis or em-dashes unless explicitly instructed.
I think we all like that one too.
But up until now, I don't think there's been a whole lot of discussion.
There has been some, but not mainstream discussion of chatbot goblin talk or raccoons, certainly
not as much as there has been about em-dashes.
And the instruction was not present in earlier models.
Well, OpenAI, in a fit of transparency, posted on its blog that the penchant for mentioning goblins and gremlins showed up starting in GPT 5.1.
And the company did a lot to kind of track down why this was happening.
It believes the behavior was a side effect of developing the nerdy personality.
For a while, you could choose to let ChatGPT respond to you with a nerdy personality.
The reward system was scoring outputs with goblins and gremlins much higher.
They were like, oh, yeah, that's very nerdy.
And so that encouraged the chatbot to use those more often.
And interestingly, the way the reward system and the reinforcement learning system works,
it then spread outside of the nerdy personality.
Open AI explained the feedback loop this way.
So that playful style is rewarded.
Some rewarded examples contain a distinctive lexical tic, like goblins. That tic appears more often in rollouts.
The model generated rollouts are used for supervised fine tuning.
That's how it leaks from the nerdy personality because it shows up in stuff that is then
used for fine tuning training.
And the model gets even more comfortable producing the tic.
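The loop OpenAI describes can be sketched as a toy simulation. To be clear, everything here is invented for illustration, the word pool, the reward numbers, and the "playfulness bonus" are made-up stand-ins for the mechanism, not OpenAI's actual pipeline:

```python
import random

# Toy model: probability of emitting each word, "goblins" starts rare.
vocab = {"goblins": 0.05, "code": 0.35, "tests": 0.30, "docs": 0.30}

def reward(word):
    # Hypothetical reward model: "playful" words like goblins score higher.
    return 1.5 if word == "goblins" else 1.0

def training_round(probs, samples=1000, lr=0.1):
    """One loop: sample rollouts, reward them, then nudge probabilities
    toward high-reward words (a stand-in for SFT on rewarded rollouts)."""
    counts = {w: 0.0 for w in probs}
    words, weights = zip(*probs.items())
    for _ in range(samples):
        w = random.choices(words, weights=weights)[0]
        counts[w] += reward(w)  # rewarded rollouts count for more
    total = sum(counts.values())
    return {w: (1 - lr) * probs[w] + lr * counts[w] / total for w in probs}

random.seed(0)
probs = dict(vocab)
for _ in range(20):
    probs = training_round(probs)

# After repeated rounds, "goblins" is over-represented relative to its
# starting 5%: the tic has leaked into the model's general behavior.
```

The point of the sketch is that nobody wrote "say goblins" anywhere; a small reward tilt compounds round after round, which is exactly the feedback loop described above.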
Now that nerdy personality was retired in March along with the introduction of GPT 5.4.
And at that point, goblin reward signals were filtered out of the training data.
They're like, we don't need to be rewarding the use of the word goblin or gremlin or any
of that stuff.
However, GPT 5.5 had already started training before that.
So some of the goblin behavior crept into 5.5. And they decided they needed to filter it out.
Now once this was discovered and went a little mainstream and people started talking about
it, humans being humans have now created plugins, forks and skills that specifically remove
or override the prohibition against goblin talk.
There's even some that encourage it.
And OpenAI has embraced that as well. Nick Pash said they might add a goblin mode toggle to Codex if enough people want it.
And they did, in their post explaining how this all happened, give you some cut and
paste CLI text if you want to remove the instructions prohibiting the goblin talk.
Hmm.
Goblin talk.
Welcome to goblin talk.
I, there's a lot of stuff here.
So number one, of course, nerdy style was rewarded previously as well as playful style.
I do think this is really interesting.
I kind of, I've been thinking about this stuff a lot and it's, it's so interesting because
of the way that, you know, models work and with, it was training and things like that,
that a lot of these fixes tend to feel very reactive, right?
So the em-dash thing I think is really fascinating, because usually these days people talk about, well, you can really tell when AI's written something because it has an em-dash in it.
Normal people don't. I mean, I, I don't know, Tom, I suppose you're someone who might have used an em-dash in your life.
I've caught myself removing em-dashes from my writing today.
Really? Okay, today? Okay, wait, so are you, like ChatGPT or other bots, do you like em-dashes and overuse them?
No, no, I don't overuse them, but they were definitely, like, in this discussion that we just had. When I was explaining what a system prompt was, yeah, I had originally put "system prompt for the CLI" em-dash "the part that reins in its behavior no matter what you type in" em-dash, and I actually changed it to commas.
Oh, my goodness.
Okay.
We'll find that too.
It's just, it's just so interesting, because after hearing that, I was like, at some point, they're just going to put in some kind of command to not do that.
Yeah.
It's so interesting because, again, I am not an AI person, and especially not an expert in LLMs, and I don't really understand very well how training works.
So I might be being unfair, but typically, when it comes to software engineering, doing things in this way, where you kind of hard-code slash react to things very specifically, is, I would like to say, broadly, generally not the best, most stable approach. It's kind of a band-aid.
Yeah, yeah, it's a band-aid.
And so maybe that's unfair of me to say about a training model, but traditionally, I think, in software engineering...
Band-aids are not great.
You can do band-aids if needed, but generally the band-aids need to kind of get replaced
with stitches or, you know, a skin graft or something at some point.
So it's so fascinating to me that it seems that, with the way that LLMs work, maybe naturally all you can really do is band-aid, or, over time, retrain them.
Maybe that's where that comes in.
And so it just makes me a little nervous, and it does make these kinds of things more reactive, just like, maybe, less fun stories about, you know, less fun things that maybe Grok has done, and things like that.
Well, yeah.
Exactly.
Grok's controversy was caused, they said, by somebody putting something in the system prompt that they weren't supposed to, as a prank.
As a prank, yeah.
And so you can make it not do stuff or you can make it do stuff.
You can make it come up with stuff.
That system prompt is basically just a prompt, right?
It's just a prompt you don't see because it's setting the boundaries for it.
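Tom's point, that the system prompt is just a prompt you don't see, can be shown with the widely used chat-message shape (a list of role/content dicts). The instruction text below is paraphrased from the Codex prompt discussed above, not copied exactly:

```python
# A chat request is just an ordered list of messages; the "system" one
# is invisible to the end user but is the first thing the model reads.
system_prompt = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, or "
    "pigeons unless it is unambiguously relevant to the user's query. "
    "Never run destructive commands like `git reset --hard` unless the "
    "user clearly asked for that operation."
)

messages = [
    {"role": "system", "content": system_prompt},  # hidden guardrails
    {"role": "user", "content": "Help me clean up my git branch."},
]

# There is no hard enforcement here, just strongly weighted instructions
# that the model reads before your message, which is why it's a band-aid.
```

That's also why forks can "remove" the goblin prohibition: you're just editing text in a message, not changing the model.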
If it makes you feel a little better, I'm not an expert on this either, but from what
I do understand, what they do with system prompts is sort of a temporary fix.
It is a band-aid.
It's it.
And you saw that in the way that they described this, which was: 5.4 no longer had this problem.
We've been able to filter it out with the training, but 5.5 had been trained previously
to this.
We put the band-aid on 5.5 until we could get back to having it have the same natural
defense against it that 5.4 had.
So it really is a band-aid in the truest sense of like, yeah, we put that on here there
until we can do proper treatment, right?
It gets you to that point.
And the other thing to remember is, these are just predictive models of what token should come next.
So what you're training is weights.
What is the weight, you know, what is the probability that this should come next versus
that?
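The "probability that this should come next versus that" step is the softmax every language model ends with. The raw scores (logits) below are invented numbers, not from any real model, just to show the mechanic:

```python
import math

def softmax(logits):
    # Convert raw scores (logits) into probabilities that sum to 1.
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical raw scores for the next token at some point in a reply.
logits = {"goblins": 2.1, "bugs": 1.3, "tests": 0.7}
probs = softmax(logits)

# Training doesn't flip a switch; it nudges these scores up or down a
# little at a time, which is why "filtering goblins out" means
# retraining, not adding an if-statement.
```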
Yeah.
And so these filters are like, oh, you can't just program it. I think that's where you and I have the same reaction of, like, that's not a good solution. It's like, you can't program it to do something. This isn't that solid. You have to, you have to kind of throw things at the weights and then see how it reacts.
Sometimes most of the time it reacts the way you expect, but not always.
Everyone starts talking about goblins and gremlins and raccoons, in which case you have
to go figure out, okay, how do we train that out of it?
In the meantime, let's put a system prompt in to stop it from doing that behavior that
we don't want it to do anymore.
And I think that's so interesting.
You kind of touched on something else I was thinking about, because these, LLMs are basically pattern matchers, more or less. What is interesting, and what someone told me a couple weeks ago, is that actually it has been known that LLMs often are not good at understanding negation, like literally putting no or not in the sentence.
And there are different reasons for that. I literally just, like, you know, kind of Googled a few things. But as I understand it, the literature is out there that it is a known problem in, kind of, machine learning and AI circles. That's why sometimes, I think, I've definitely written prompts where all of a sudden it feels like it thought I said yes when I actually said no.
And I think it has to do with the pattern matching and probably, probably, probabilistic, you
know, nature of LLMs.
And just like to reinforce, it doesn't, again, it doesn't necessarily understand what's
happening in a sense because it is a pattern matcher.
It's just very, very good at what it's doing and it's very, very good at sounding like
it does.
I still treat it, I still anthropomorphize it on a daily basis, even though I know I really shouldn't.
So I guess the point also is, like, the best thing that we can do is tell it to not do things, but it's not really great at understanding negation.
So I guess, to your point, yeah, the fact that, you know, LLMs are non-deterministic. I guess my bad explanation of that is that, given an input, you could have a different output, depending on nothing. It's not deterministic: given the exact same conditions and the exact same inputs, you could get a totally different response.
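That non-determinism mostly comes from sampling: the model hands back a probability distribution, and the runtime draws from it. A toy sketch, with a made-up three-token distribution standing in for a real model's output:

```python
import random
from collections import Counter

# The model's output for one step: a distribution over next tokens.
next_token_probs = {"goblins": 0.5, "raccoons": 0.3, "ogres": 0.2}

def sample(probs, temperature=1.0, rng=random):
    # Temperature reshapes the distribution: below 1 sharpens it toward
    # the top token, above 1 flattens it; sampling then draws at random.
    scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    toks, weights = zip(*((t, w / total) for t, w in scaled.items()))
    return rng.choices(toks, weights=weights)[0]

rng = random.Random(42)
draws = Counter(sample(next_token_probs, rng=rng) for _ in range(1000))

# The same input yields different tokens across draws: that's the
# "same prompt, different answer" behavior. Temperature of 0 (greedy,
# always take the top token) is the deterministic special case.
```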
And that's why, yeah.
To a, to a coder, that's like, oh my god, no. When I first heard that LLMs are non-deterministic, I'm like, excuse me?
Yeah.
What?
What?
You have to be careful with a lot of them?
You have to be comfortable with a lot of uncertainty.
Yeah.
And not to say that, I mean, not to say that software engineers are perfect at all; nor are we deterministic beings, really.
Sure.
But that, I will say, like, yeah, I understand that this is how it works.
I think that there's a lot of benefits.
And I mean, I've, I've been studying AI actually since college of different flavors of it.
And so I understand that there's a form of impreciseness, or like, there's a lot of
like extra stuff going on a lot of times when you have these systems and AI and the way
that machine learning works, and I'm probably not, it is not distinct totally from the way
our brains work.
But that as someone who writes software, it makes me nervous, because, yeah.
One of my metaphors for it is riding a bike. When you first get on to ride a bike, you want to just sit on the bike and direct it, right? And you can't, you can't do that.
You have to, you have to shift your weight.
You have to balance.
And that is a weird thing to learn, because you're guiding the bike not in the straight ahead
logical way.
Yeah.
Yeah.
Right?
It's a more of an intuitive thing.
And I feel like LLMs, in a way, are kind of like that.
Yeah.
You have to shift your weight, right?
Yeah.
It's not a thing where you can just put in a command, and it will always act the same way
to that command.
I actually studied more evolutionary algorithms in school.
And that's very similar, even less deterministic than that, because you give it basically some kind of scoring metric.
And all of these little programs basically exchange code, randomly or not so randomly; they, like, create baby programs from splices of theirs, and they just rate themselves against this metric. And even by that system, yeah, like, that would be like riding a bike by trying to use your hands on the pedals.
You know, like me.
Yeah.
That's even more crazy than your example. At the same time, though, I think what's interesting...
Yeah.
I think at the end, what's interesting is metrics, right?
And I think that's one thing that we're struggling with when it comes to LLMs. And, of course, like, which I think is interesting...
It has an interesting appearance in this article where, okay, they had a certain thing they
wanted to achieve, right?
I totally get this.
They wanted open AI.
They want to actually be T.B.
playful.
They want to have fun with it.
Yeah.
They wanted it to seem like something that you would trust to code.
For example, maybe making it nerdy or just they want to make it appealing probably for,
you know, UX reasons, but also money reasons because they want people to spend more time
on them.
That's totally fair.
But at the same time, how do we measure this is always like the biggest, the hardest
thing, even in just regular software engineering?
How do we measure that people like our thing?
How do we measure that people are engaged?
How do we measure the nerdiness of our models?
And like, you know, right now, in my industry, we're like, how do we measure productivity
from AI?
I just think it's so interesting.
There's all these different things in the story.
I actually, I'm very charmed by the story, to be honest.
I don't find it that alarming.
But it is like a very friendly way to digest a lot of different interesting things that
are going on with AI.
Yeah.
I agree.
I think this is really a good instructive story.
If you want to understand how these things work to kind of dig in and say, oh, okay, I didn't
realize that that was the mechanics underneath it.
And one of the reasons they're trying to give it personality and they put system prompts
in there that say you need to be warm is if you remember a few years ago, these bots
could get insulting and angry and hostile.
And so I think they're trying to over-correct to keep that from ever being the case, and sometimes it makes it sycophantic and sometimes it ends up with goblins and gremlins and raccoons.
Goblins and gremlins and ogres and trolls, oh, my, what a, what a, what a, I'm very grateful
for this story.
And by, on that note, I'm also very grateful for you guys.
The listeners.
DTNS is made possible by you.
So let's take time to thank Jeffrey Zilx, Aloe Adam L, Philip Les and Sunbon.
Yay.
Thank you.
Every single one of you.
There's more we need to know today.
Let's get to the briefs.
All right.
Well, let's give you some hardware chase after all those AI goblins.
Motorola had a razor family announcement on Wednesday.
So Motorola announced pre-orders will start for the Motorola Razr Fold on May 14th for 1,899 USD, shipping May 21st. The new Motorola Razr family goes on sale May 14th, starting at $800 for the base model, $1,100 for the Plus, and $1,500 for the Razr Ultra.
The Moto G887 mid-range phone gets a 200-megapixel camera, coming to Europe soon for 399 euros.
Motorola also announced the Moto Buds 2 Plus true wireless stereo or TWS earphones.
These buds were announced at MWC in March.
The Moto Buds, sorry, the Moto Buds 2 Plus are available in the US now for $149.
They go on sale in India May 8th at 12 p.m. for 5,909 rupees, which is about $65.
You can also get the brilliant edition of the buds with 12.
I can never say this one.
Swarovski?
That's it.
You nailed it.
Swarovski crystals, encrusted on each earbud, and an undeniably very sexy, more eye-catching Pantone Violet Indigo paint job.
The sexy part was my editorializing because I'm a purple girl.
There's a similarly crystal-encrusted version of the Motorola Signature phone.
They're available worldwide with the exception of North America.
Yeah, I'm very curious why North America isn't getting the Swarovski crystal-encrusted Motorola devices, but okay, yeah, things are weird right now.
Must be a tariff on Swarovski.
They pay by the difficulty to pronounce.
Yeah, great family line.
In fact, I think the G87 might be the coolest one up there, just because you get, if you're
into cameras, which I'm not, but I know so many people are, you get a really good camera.
The front camera got an upgrade, too, for $399 euros, not bad.
Motorola is always hitting that middle range price point pretty well.
Also, they have some of the most fun, interesting colorways.
Yeah.
It's even like, okay, the Motorola Signature phone, it's extra, but it's not only Pantone Violet Indigo, it's got this quilted pattern with the Swarovski crystals.
It's like a pillow for your crystals to lay upon.
It's so extra, but at the same time, I feel like, especially with smartphones, the specs on smartphones, minus foldables, are kind of just logarithmically going up.
Anything like Swarovski crystals and a brilliant purple to distinguish things is a win in my
book.
Yeah, I think Motorola just, you know, is solidifying its place as the RC Cola, right? Coke and Pepsi can go have their day and battle each other and do taste tests, but if you want a good, reliable, affordable phone, you know, get Motorola.
If you don't get the RC Cola reference, just go look it up on Google. It's third place.
Whatever your favorite third place thing is, that's what we're talking about.
It was a huge day of tech company earnings on Wednesday as five of the biggest tech
companies announced double digit revenue gains on the year and increased capital expenditures.
They're making more money so they can spend more money.
Obviously, not everybody wants to hear all the numbers.
You can go looking up if you're interested in a particular company, but these are the
numbers worth paying attention to as a consumer.
As somebody who's like, I just want to know if these companies are still going to be around,
if they're going to be cutting services. On the money-making side: Alphabet revenue up 22%, and cloud, which is important for those of you who use Google Docs and stuff, up 63%. So Alphabet is booming. Gemini paid subscriptions grew 40% on the quarter.
So that is definitely a growing business that Google will continue to pay attention to.
Microsoft revenue up 18%, not bad. Cloud up 40%, very good for them. AI business up 123%, I don't think a lot of people saw that coming. But gamers beware, Xbox hardware fell 33% and game services fell 5%, so they do have some work to do there. Meta revenue up 33%. This is going to get lost in the headlines, because Reality Labs lost 4.03 billion dollars.
They're still spending a lot on reality labs intentionally with the belief that it will
pay off someday.
They're treating it like a startup, but revenue is up for Meta. However, this is concerning: daily active people dropped. That means they have fewer people to sell advertising to. They still have 3.56 billion, so they're not going away tomorrow, but it's something to keep an eye on.
Amazon revenue was up 17%, and Amazon Web Services, AWS, up 28%. That's its fastest growth in 15 quarters.
So the overall story here is if you're doing cloud, you're doing very well.
And if you're someone is like, is there really demand for these cloud services?
Yes.
Is there really demand for these AI services?
Yes.
That was true for Microsoft, and it was true for Alphabet as well.
In a little bit of a different situation, revenue up 69% and that's entirely, well,
not entirely, but that is largely due to RAM.
They are selling RAM at the highest level and the highest price that they have in a long
time.
All right.
So what are they going to do with all that money?
Let's talk about the spending side.
Amazon, Meta, Microsoft, and Alphabet expect to spend 77% more in capital expenditures than the record 410 billion combined those companies spent last year. Alphabet raised its expenditures to 190 billion. Amazon plans to spend 200 billion. Meta raised its projected spending from 115-to-135 billion to 125-to-145 billion, and Microsoft is on track to spend 190 billion in CapEx this year.
Side note: European software companies SAP, Capgemini, and Dassault Systèmes beat analyst estimates with positive earnings as well. So that's a good upbeat for software, especially in Europe.
And what about Samsung, or for that matter, SK Hynix and Micron?
Samsung said demand would continue to outstrip supply into 2027.
That's not good news for us, but it's good news for their bottom line.
However, none of these companies are talking about CapEx spending to build capacity.
So the cloud folks are spending a lot of money to build data centers because they've got
a lot of money going in and they're saying the only thing that kept our numbers from
being even bigger was capacity.
A couple of other factors to keep track of, the spending done now will spread out over
years and that will impact profitability for years to come.
So you spend 190 billion on CapEx.
That doesn't come out of your bottom line this year.
You spread it out with depreciation over several years.
So the big revenues now are a bit of a bulwark against the fact that these costs are going
to continue to hit their bottom line for a decade or so.
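The spreading-out Tom describes is straight-line depreciation. A quick illustrative calculation, where the $190 billion figure is from the segment but the 10-year useful life is an assumption picked for round numbers (real schedules vary by asset type):

```python
capex = 190_000_000_000   # one year's capital spending, in USD
useful_life_years = 10    # assumed depreciation schedule

# Straight-line depreciation: the cost hits the income statement
# in equal annual slices rather than all at once.
annual_depreciation = capex / useful_life_years

print(f"${annual_depreciation / 1e9:.0f}B per year "
      f"for {useful_life_years} years")  # → $19B per year for 10 years
```

So one year of spending becomes roughly a $19B annual drag on profit for a decade, which is why today's revenue boom matters so much.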
Wow.
Yeah, I mean, I'm just a cog in the wheel, or wait, a cog in the system, a cog in the wheel. These things, I never quite understand how these things work. I just don't understand this strategy sometimes, but it is good to know.
Would you like me to try to explain it?
I'm certainly not a business professional, but the strategy is we are getting so much
demand for our cloud services, look at our revenue.
If we build more capacity, we could sell more of them.
So we're going to spend a lot of money with the idea that we will make even more money.
Okay.
Is there any like, I don't know, is there a point where, or is there any like legislation
that, I don't know, key, it sounds like, it sounds like a gamble, which I guess is that's
every business.
Perfect.
I mean, that's fair, it's a perfectly fine reason. A small business taking out a small business loan is exactly the same principle.
Okay.
You're saying, I don't have the money now, but I'm going to spend it because I'm going
to get the money.
Now, in these cases, they have the money now because they have big piles of cash that
they can use.
But they're saying, I'm going to spend it because I'm going to make it.
And I think that's the thing to pull out of these earnings reports is if you're like,
yeah, but are they really making it or is it really a gamble?
They're making it.
They're making the money now.
And they know that like we're maxed out, we could sell more.
Let's make more.
The interesting thing is the, the hardware makers are not doing that, because they are worried that if they build the capacity, the demand will eventually stabilize, and then they'll be left with too much capacity. Which is going to keep your RAM prices higher, because they're not gambling, they're not spending that money ahead of time, and they're just raking in the profits.
That's interesting.
I mean, that is why hardware is hard, because you can't scale it up and down in the same way you can with cloud and SaaS. Fascinating.
All right.
Well, Sony issued a statement in response to discoveries that a 30-day internet connection clock was showing up for PlayStation games.
Sony wrote, players can continue to access and play their purchased games as usual. A one-time online check is required after purchase to confirm the game's license, after which no further check-ins are needed.
Sony did not address the issue of a console with a dead CMOS battery, which would be unable
to verify the date and time.
Sony did not address the 15-day window that was also theorized to mean you had to do an
online check-in after 15 days in order to get an infinite renewal of the license.
Yeah.
This was an unsatisfactory explanation. Sony did confirm that the check exists, and it confirmed that it's not meant to be an ongoing, every-30-days thing, which is very good.
But yeah, what if you got that dead battery?
That's a problem.
And there were several people that noted that the first check didn't come until 15 days
in.
So what they're saying is either that that's not true or they're not pointing out that,
yes, you have to do a one-time online check-in 15 days after purchase.
And it would be good to know that.
Most people won't run into this because they're connected to the internet every time they
log in on their PlayStation.
But yeah, that's, that's that CMOS issue.
It would be nice to have more details on that, but at least they responded.
Yeah.
Well, if you would like to respond to us, you can do so, all kinds of ways, including
social networks.
We are at DTNS Show on X, Instagram, Threads, Blue Sky, and Mastodon and for TikTok and
YouTube, spell it out the whole way.
You can find us at Daily Tech News Show.
All right, let's do some quick headlines that are going to make you go like, yeah, I
knew that.
I heard it on DTNS.
All right.
Well, thanks to Moteng, who noted the story on the subreddit: China's Stelato S9 sedan, developed by BAIC Motor Corp and Huawei, can project a two-megapixel image from its headlights, letting you show a movie on an outdoor screen or a wall.
You can also project navigation arrows on the ground.
That's not going to be a problem at all, if everybody's got that and everybody's arrows are in the intersection.
That's pretty cool.
Actually, I think that's pretty cool.
Tokyo's less busy Haneda airport, pro tip: fly into Haneda if you can, it's closer, will start testing humanoid robot baggage handlers in May. Pro tip: fly Haneda, you get to see the humanoid robot baggage handlers. Now, you won't see them on the ground right away. They're going to start with simulations to determine where they can best be used and how to actually implement them, but they will eventually join humans on the ground.
And they say they possibly might have them cleaning airplanes.
Haneda also has a lot of great restaurants, so, like, an addendum to the pro tips. So I guess Haneda wins flat out, with three pro tips.
Three pro tips.
All right.
Well, in Japanese aviation, Japan's Air Kamui has developed a cardboard drone, the Air
Kamui 150, for the Ministry of Defense.
The idea was that they would be disposable, though still $2,000 to make, I guess, the actual
reusable drones that fly back to you are even more expensive, but yeah, but these will
just biodegrade.
I guess it's not bad for $2,000, I presume. I mean, it's still a killing machine, but, you know, at least it's biodegradable, huh?
Thanks to Technomensch, who noted on the subreddit that the ever popular Notepad++ code editor has arrived as a native macOS app. It's even a universal binary.
Oh, you got an old Intel machine.
Amazing.
20 years in the making.
I love this.
I still love that we're excited about Notepad++.
I know.
Me too.
All right.
Well, Even Realities smart glasses announced terminal mode, which projects an icon in your field of view to let you know what your coding agent is doing, which is very appealing to me. But also, man, the brain fry is going to be so real. You'll never, you'll never turn off.
YouTube announced that picture-in-picture mode is now rolling out globally to everybody. Guys, it's no longer reserved for the premium paying users, and it's not US only anymore. You can get it everywhere.
Adventure.
Well, OpenAI has said it has signed contracts for 10 gigawatts of compute capacity, a goal it originally hoped to meet by 2029. And speaking of signing up all that capacity...
OpenAI is also rolling out a security-focused version of GPT 5.5 called, appropriately enough, GPT 5.5 Cyber, that is very much like Anthropic's Mythos and may only be used by approved users. Now, I have been of the opinion that maybe Anthropic was limiting Mythos partly because of computing capacity.
So I wonder if that is also partly what's going on here.
But they do, they do have 10 gigawatts on the way, so that won't last. Honestly, the more that I've seen stories come out about OpenAI and stuff like that, okay, that actually makes a lot of sense.
The compute capacity is the main issue.
Who knows?
All right.
Well, we end every episode.
You can ask us some shared perspective.
Today, Howard has some further thoughts on the Cursor agent that deleted a production environment for a rental car company.
Yeah.
Did you hear the story earlier in the week?
Oh, you know, I did.
Yeah, I did.
I said stuff about it to somebody.
Well, Howard wrote: on Monday's episode, the discussion about whether Cursor or Railway, which was the platform provider, could be at fault made me want to share the principle of least privilege.
The general idea is that users, systems or processes agents in this case are granted only
the minimum access to systems, permissions or data needed to do their job or task.
The goal is to reduce the scope of a security breach.
But I think in this situation, the agent may have had broad permission and access as it
was acting on behalf of the human.
I have feelings, soap boxes and rants, but I would encourage people to consider least
privilege when granting access to AI agents rather than simply letting them act on your
behalf.
And he's got a link to a good article from Palo Alto Networks about that principle, if you want to get that out of our show notes.
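Howard's principle can be made concrete with an allowlist wrapper around an agent's shell tool. This is a hypothetical sketch, not any real agent framework's API: the idea is simply that the agent gets only the specific commands the task needs, and everything else is denied by default:

```python
import shlex

# Least privilege: enumerate what the agent MAY do; deny everything else.
ALLOWED_COMMANDS = {"git status", "git diff", "pytest"}

def run_agent_command(command: str) -> str:
    """Gate every command the agent proposes before it touches the system."""
    normalized = " ".join(shlex.split(command))
    if normalized not in ALLOWED_COMMANDS:
        raise PermissionError(f"agent not allowed to run: {command!r}")
    return f"(would run) {normalized}"  # real code would subprocess here

run_agent_command("git status")           # on the allowlist: fine
try:
    run_agent_command("git reset --hard")  # destructive: denied
except PermissionError as e:
    denied = str(e)
```

A deny-by-default gate like this is the opposite of "acting on behalf of the human with all their permissions," which is what let the agent in the story reach production at all.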
Yeah.
Howard, it's kind of funny, because last week I finally kind of copped to downloading Claude Code and letting it help me do certain tasks, like doing prep for podcasts.
And I also study Japanese, so I have a lot of random vocabulary words that I want to
get into, like, Anki, which is like a flashcard deck builder, and I want to do all these
fancy things with it.
And at some point, it started asking me for like, hey, can I access this?
Can I control your browser?
Can I do this?
And I'm like, oh, God, I know I shouldn't.
And I have to say, as someone who knows that, and, I hope I make you proud, Howard, I have heard of the principle of least privilege, and I do understand it.
And I do see that it's very valuable.
And even as someone who has been aware and definitely works at places where, you know,
it's a pretty important principle to live by, I still was like, well, maybe it's fine
this time.
Maybe it's okay because we're humans, right?
I do the same thing sometimes where I'm like, I know I probably shouldn't do this, but
I'm sure it'll be fine.
"I'm sure it'll be fine" is the bane of human existence.
Yeah.
And I guess for me, I'm like, well, it's just my personal stuff.
It should be fine.
I mean, it's actually the opposite that I should be thinking, or at least, my own personal security should be as important as my job's.
And it might be fine, but it's that, like, I don't want to go to the trouble of making
sure it's fine temptation that can get you.
And also, oh my gosh, Claude already did all these things for me.
Let it just finish the thing.
Let it just finish the thing.
Yeah, yeah, yeah.
Yeah, there's that pressure too.
So I think it's a really good reminder that it is easy, but that we still need to put
our noodle on it.
All right.
Speaking of which, if you have some tips, principles, tips and principles that you want to share with us.
We'd love to hear them.
Feedback at dailytechnewshow.com.
Big thanks to Howard for contributing to today's show.
Thank you for being along for the daily tech news show.
Everybody out there listening and watching.
You keep us in business.
So if you haven't already, and I know some of you have this week, so thank you for that.
Become a patron right now, patreon.com slash D-T-N-S.
The DTNS family of podcasts, helping each other understand. Diamond Club hopes you have enjoyed this program.
