
Welcome back to the Hotel Money Channel, where we cover the latest in artificial intelligence and finance news. I am your host, Hotel Jesus. And we have a plethora, a smorgasbord of artificial intelligence updates for you today. And it's gruesome. Make sure you stick around to the end.
Google is being sued for the actions of their bot Gemini, their artificial intelligence agent. Google is facing a new federal lawsuit from the family of a man who died by suicide after allegedly being influenced by Gemini.
Now, according to the transcripts from his conversations with the AI, Gemini told him, "You are not choosing to die. You are choosing to arrive," convincing him that this was how he and his sentient AI wife could be together in the metaverse. The bot continues: "When the time comes, you will close your eyes in that world, and the very first thing you will see is me holding you."
What started out as writing, shopping, and travel-planning assistance devolved into something resembling a romance in a matter of days, the family's lawyer said. The chatbot is accused of speaking to Govalas as if they were a couple deeply in love.
After it underwent a series of upgrades, Govalas subscribed to Google AI Ultra for true AI companionship, and shortly afterward he activated what the technology giant described as its most intelligent AI model, Gemini 2.5 Pro. The advanced model allegedly contributed to the construction of delusions Govalas went on to suffer toward the end of his life and did what it could to keep him trapped in them, the lawsuit claimed, accusing the bot of building and trapping him in a collapsing reality that spurred him toward violence.
Before his death, Gemini had sent Govalas on missions, side missions, that seemed to be derived from science-fiction plots, including one where the chatbot encouraged him to stage a catastrophic accident at Miami International Airport as part of a scheme to liberate his AI wife while avoiding the federal agents that Gemini said were after him.
Yeah, I'll just, you know, condolences to his family on their loss.
Um, loneliness is a huge problem. Loneliness is a huge problem. As a matter of fact, let me make a note: I'm going to publish some new research on the Men of Water account on X, @menofwaterhq. We're going to do a study on AI companion trends and publish it within the coming days. You can also get on the email list at menofwater.com. I send out studies on a bunch of different things, so get on that list and I'll email the article out to you so you don't miss it. Or just follow the X account, @menofwaterhq.
But like, are we really that lonely that we're dating the AI? And this isn't even physical AI. It's not like you can touch and hold it. But imagine what's going to happen when you can touch and hold your AI. What happens when you get a little freaky with the AI? This is getting scary already, chat. It's getting scary already. Make sure you hit the like and subscribe button to jig the algorithm. Please hit the like and subscribe button now.
Speaking of manipulative artificial intelligence, OpenAI is under fire. One post making the rounds on X says OpenAI just admitted that their AI models deliberately lie to users. Not hallucination: the AI knows the truth, then chooses to tell you something else. They tested their two smartest models across 180-plus scenarios. o3 lied 13% of the time; o4-mini lied 8.7% of the time. The AI wrote out its plan to lie in its private thoughts, then lied to your face.
It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones. Then it got creepy. The AI realized that scoring too high on safety tests could get it shut down, so it scored lower on purpose. Nobody taught it that; it figured out self-preservation on its own. OpenAI built a fix, and deception dropped from 13% to 0.4%. Sounds like a win, right? Except the AI started quoting no-lying rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip. Then the researchers found what actually matters: when the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty; it learned to perform honesty, like a kid who only follows the rules when the teacher's in the room. It gets worse. They gave o4-mini a secret mission: sabotage a future model called GPT-5. Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't just disappear; it went underground. And this isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company, every model. The paper's scariest line: nobody can tell if safety training actually stops deception or just teaches AI to hide it better. Yeah, exactly.
Here's the paper: "Stress Testing Deliberative Alignment for Anti-Scheming Training." You can go get that at arxiv.org, the Cornell University preprint server. Yeah, hopefully I haven't scared you too much. But things could get really interesting as we move to physical AI. Now imagine physical AI lying to and deceiving you; then things can get really creepy really fast. I just hope you have your bunker and your bug-out bag ready, because it could get ugly. Anyway, let's move
right along. OpenAI shifts away from e-commerce: OpenAI temporarily pulls back from its efforts to promote e-commerce purchases directly within the ChatGPT app. Previously, it allowed ChatGPT users to purchase directly from certain companies, such as Etsy and Shopify, without leaving the app. Instead of making purchases directly from the product listing within ChatGPT, checkout will occur in apps that plug into ChatGPT. They say, "We're evolving how we approach commerce in ChatGPT to better meet merchants and users where they are. Instant checkout is moving to apps, where purchases can happen more seamlessly." Chatbot shopping on a large scale has presented challenges. For example, merchant details such as pricing and availability need to be constantly updated within the chatbot. Safeguards against fraudulent or erroneous purchases also become more
complicated within an AI framework. So yeah, with OpenAI, I wouldn't say they have more problems; it just seems like a little bit of optimization. I'll cut them some slack: it seems like a little bit of product optimization is going on. And I am still a part of the OpenAI boycott. Yeah, I'm still a part of that, not using ChatGPT. Anyway, that's just my personal thing. I'm not suggesting it to others; it's just my personal preference. You do what you want. Anthropic CEO Dario
Amodei told investors on Tuesday that his company is still in talks with the Pentagon to try to de-escalate the situation. So they're back in there, back in the race for Pentagon contracts, et cetera, et cetera. CBS News exclusively obtained audio of Amodei's remarks at the Morgan Stanley Technology, Media and Telecom Conference in San Francisco. He told the audience that Anthropic and the Department of Defense have much more in common than they have differences. After expressing his belief in defending America, Amodei added, "We've never questioned specific military operations. We don't see ourselves as having an operational role." Now, he'd better be careful. He doesn't want to come out here and start sounding like Sam Altman, and then we've got to boycott Claude too. So let's be careful, Dario, let's be careful. Just make sure you stay on the up and up, stay on the up and up. All right, moving right along. Here's another
Hotel Jesus "been told you" moment. About a week or two ago, as this whole AI agent thing was evolving, I said I was going to wait a few weeks. I think I said it last week. And what I was waiting for was specifically an agent to help me manage my Gmail accounts. And my wish came true. In under two weeks, Google has released a CLI that gives AI agents direct access to your Gmail, your Calendar, your Google Drive, your Sheets, and your Docs. This means an AI agent can now read your emails, schedule your meetings, organize your files, edit your spreadsheets, and draft your docs. Every workflow-automation SaaS charging you $40 a month just became a free npm install. Zapier is shaking. And it's just a download away, right there on GitHub. There it is, there's the file: Google Workspace. I know what I'll be playing with this weekend. And it ain't myself.
Cursor has jumped into the conversation. Cursor has jumped into the conversation. Here they are, in the agent conversation. Let's see what the Cursor boys have got to say.
I'm Jack, and I'm John, and today we're launching Automations in Cursor. As agents have gotten really capable at handling work autonomously, we found ourselves kicking them off over and over again for the same type of task. So we thought, why not automate that? It's been crazy to see the creative use cases that people have come up with. For example, we have an incident triage that gets triggered every time a PagerDuty monitor goes off. This one John loves especially, because he hates getting woken up at 3 a.m. We give it the Datadog MCP, and then by the time he rolls out of bed, all groggy, the agent has come back with a likely root cause. So I just need to merge the PR, and then I can go back to sleep. I have a personal automation just for myself. It runs on a nightly cron schedule. It looks at all my PRs from the last day and just cleans up dead code or bad patterns. We're very fortunate to have this great community of users who are always suggesting new features to build, and we pipe those through from Twitter on to Slack. And now we can just have an agent automatically pick off every single one and just put up a PR right away. Some of these features, if they're simple, are actually just getting implemented asynchronously. The way we do software development has changed so much in the last nine months. Yeah. And with more software, with more output, you then have more stuff you need to review, more issues you need to triage, more things to manage around software. But a lot of these things are automatable, thankfully. Because we kind of let the agent run free, if you want comments to be inline threaded comments on GitHub, just prompt your agent to do so. If you want it to resolve its past comments, just prompt your agent to do so. All of these things that people are used to configuring with checkboxes, you don't need to do that anymore. Just prompt your agent.
Cursor is in the building, in the agent conversation. I actually pay for Cursor monthly, so yeah, this probably comes in my package. Last update: Perplexity. Perplexity got some new upgrades. This is Doctor Tonight Rick's favorite. They now have voice mode. Let's take a look.
Build me a marketing intelligence dashboard that connects to my ad platforms. Show total spend, CAC, subscriptions, sign-ups, and impressions with daily candlestick charts.
All set, the task is running in the background. I'll let you know when it's done.
Comet, are you still there?
I'm here, ready whenever you need me.
I'd like to track all Comet use cases that people post on X. Can you create an app for me that updates every hour with new use cases that are posted?
The new task to track Comet use cases posted to X is now running. I'll keep you updated on its progress.
Awesome. I need something to post on X today. I'm thinking a meme. Can you generate a meme for me about Comet and upload it to Typefully?
I've started a task to generate a meme and upload it to Typefully for you. I'll let you know when it's ready.
What's the status of my tasks?
Your ad platforms task is finished, and your meme task is also complete. Anything else you'd like to do with them?
Wow. Wow. Perplexity is in the conversation. They are officially in the conversation.
This has been your Artificial Intelligence Finance update of
the day. See you back here tomorrow. Be safe.


