
Get the top 70+ AI Models for $8.99 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
Welcome to the podcast. I'm your host, Jaeden Schafer. Today on the podcast, we have a number of
interesting stories. Meta has just launched Manus for desktop amid this AI-agent-on-your-computer
craze. Anthropic has now officially flipped a switch and is beating OpenAI in enterprise spending.
A new startup called Memories AI is building a visual memory layer for robotics. And OpenAI is
expanding their government footprint with a brand-new AWS contract. We're going to dive into all of
those today, because I think we're watching three different layers collide: infrastructure,
enterprise spend, and agents. I'm also
super excited to announce that AI Box, my own startup, has officially added video to our platform.
You can now create AI tools with video, and you can chat with over eight of the top AI video
generation models. We have ByteDance's Seedance, we have Google's Veo models, we have OpenAI's Sora
models, and we have Pixverse models on there. This is super exciting for us; it's been a big push,
and we hope to see what incredible tools you guys build with video on the platform. If you don't
already have a subscription, it's $8.99 a month and you get access to over 70 of the top AI models,
all in one platform, for less than 20 bucks a month. I hope this saves you a ton of money and that
you get access to all of these interesting new models to test out, try, and talk with. You can try
out the platform with the link in the description or by typing in aibox.ai. All right, let's get
into the episode
today. The first thing I wanted to cover is a huge story from Meta's Manus. This is a company they
recently acquired, and what's interesting is that Manus was sort of going viral; it was a Chinese
firm that kind of had to pull itself out of China before this acquisition. But they just launched a
desktop app that is basically bringing their AI agent
directly onto your computer. I think they've seen all of the hype around OpenClaw and realized
that, while they probably already had a lot of these AI capabilities, bringing them beyond just a
website and onto your computer has real value. It's a really big shift. These agents were living in
the cloud before.
Now they're going to be able to access your files, run apps, organize data, even build software
locally. And I think this is a lot closer to how people actually work, which is why OpenClaw went
so viral. I mean, it's basically the beginning of AI agents becoming our operating system layer.
I think we're going to see a huge shift here. This isn't just answering our questions, but they're
actually going to be doing the work inside of our machines. The tradeoff is obvious: more power
means more risk on the security side. You can now give AI access to your local environment, which
raises a lot of privacy concerns as well. And for some people, right, Meta doesn't have the
greatest track record on privacy and your data, so they might be a little concerned about giving it
access to all of their computer files. But at the end of the day, these are really powerful tools,
so I'm interested to see what sort of uptake Manus has. This is already a product that's been doing
quite well, and I think this makes it a lot more useful. There's also a new startup
called niv AI. They just raised funding to solve a problem I think a lot of people are not talking
about enough, which is power. AI data centers are using tons of electricity right now because GPU
workloads spike really unpredictably, and that forces operators to throttle usage or overpay for
backup capacity. So what niv is building is a system to monitor and optimize power usage in real
time; essentially, they're acting as a co-pilot for data center energy. The reason this matters is
that AI isn't just a software problem right now, it's an energy problem. Companies that figure out
how to squeeze more output from the same hardware and power constraints are going to have a massive
advantage, especially when you look at the state of the world today, with everything happening in
Iran and the energy shock we've seen over the last few weeks; energy is more important now than
ever. A lot of people are saying that AI companies are going to be severely impacted if these
energy shocks and high oil prices continue, because so much of that energy was powering data
centers. AI is basically a direct pipeline from energy to what we are all using. All of this stuff
has to be run, and it takes insane amounts of energy. A lot of the data centers and AI training
facilities we're building are told they should basically have power plants attached to them because
they use so much energy, and a lot of people are seeing their local energy bills increase due to
these data center projects. So I think this is a really fantastic startup and I'm excited to follow
along with them. There's a new startup I've been looking into
called memories.ai. They're building what might become a foundational layer for physical AI, and
that is visual memory. Instead of just remembering text like ChatGPT does, they're basically
building a system that helps AI remember what it actually sees. That means wearable devices,
robotics, and real-world AI systems that can recall visual experiences over time, right? Because
right now, if I have a conversation with ChatGPT and a month later I say, hey, for this project,
can you help me write some sort of new document or file, it can go look at my history and my
context and remember everything about that specific project. Now it's a completely
different situation when we have robots running around in the real world with cameras on them that
are learning and figuring out how to do things in warehouses and eventually in all of our homes with
something like the Optimus robots or the Figure robot. These things are going to cost like 20 or
$30,000, and they'll be in our homes to do things. And I mean, there's a whole other conversation
about whether people want that or trust that, and I think inevitably they will once these things
have improved, just like self-driving cars. But I think right now, mostly the AI
systems are living in really a digital world. And if AI is going to operate in the physical world,
right? Once we start moving from just having AI on our phone that we talked to to AI in a robot
that's walking around or in our home or in our warehouse, it needs to have memory the same way
humans do. So I think this is very early, but it points to where things are going in the future.
AI that doesn't just respond, but it actually remembers it learns it builds context from world world
experiences just like a human would. And something I would actually expect to see from an Optimus
robot or a figure robot is they would have these, this kind of memory built in. So it's learning
and understanding. And let's say your robot breaks, I would expect that you can transfer memory
from one robot model to the next one you upgrade. So let's say you've had a robot in your family for
you know, 10 years that's been helping out and it understands how to do everything inside of your
home. I think you'd be able to transfer those memories to the next iteration of the robot,
which is really fascinating, and I think a really strong moat against switching between robot
companies in the future. Now, I know this sounds crazy, but I think these are the problems people
are starting to solve now, so I'm excited to see what Memories AI does with this. Okay, Anthropic
is now capturing over 70% of new enterprise AI spend, according to Ramp data. I actually love these
types of reports; Mercury or Ramp or even
a lot of different banks will put out these types of reports, and basically they have access to
what companies are actually spending, because they have insights into their financials. So this is
really solid data coming out of Ramp. Just a couple of months ago, it was a really tight race
between OpenAI and Anthropic, but now Anthropic is pulling way ahead, and they're moving ahead
fast. If you look at the charts, it's phenomenal. Businesses are basically paying all this money
primarily for their AI coding tools, I think, but a lot of people are also paying for Claude as a
regular chat tool. At the same time, OpenAI is reportedly rethinking their strategy, shifting more
focus toward enterprise after heavily investing in consumer products.
I think right now the AI race isn't just about who has the coolest demos; it's about who really
makes money, and enterprise adoption is the real scoreboard. Right now everyone's like, wow, OpenAI
has this massive user base, which is true: almost a billion weekly active users; last I checked, it
was 900 million weekly active users. And Sam Altman, I know he was just throwing some shade, but he
said something recently like there were more free ChatGPT users in Texas than all of Anthropic's
users in the US combined, or something like that, which is sort of crazy. But if you look at the
revenue numbers, they're not that far apart. OpenAI said they're on pace to generate about $25
billion in revenue this year, and Anthropic is on pace to generate about $19 billion. So these
companies are much closer than you'd think
when it comes to revenue. Okay, there's a massive story unfolding with OpenAI right now: they're
expanding their government footprint through a new deal with AWS. On the surface, it just looks
like another partnership, but I don't think that's actually the case. OpenAI has signed a deal to
distribute AI products in the US government through AWS, and that includes access inside highly
secure environments like GovCloud and even classified regions handling really sensitive workloads.
Now, we've seen that Anthropic and the federal government, particularly the Pentagon, have had a
huge falling out, and OpenAI kind of stepped in and took a lot of those contracts. And so I think
what is actually happening is
that OpenAI is trying to insert themselves directly into the most important distribution channel
for government AI, which is AWS, because AWS already has really deep relationships across all
federal agencies. They're already compliant, and their infrastructure is already trusted. So by
plugging directly into AWS through this new partnership, OpenAI is not just selling their model;
they're becoming part of the default procurement pipeline for government AI. What gets really
interesting for me is that AWS is already heavily tied to Anthropic. Amazon has invested billions
into them; many of Anthropic's early investment rounds were led by Amazon, which famously put in $4
billion in the early days and has since put in many multiples of that. Claude is deeply ingrained
in AWS's AI platform because of that. So this was supposed to be Anthropic's home turf, but now
OpenAI is stepping directly into that system.
I don't think it's just a competition. It's really a platform level battle happening inside
that same infrastructure. Government adoption right now I think is kind of acting as a bit of a
signal to the entire market because if your model is trusted for classified and sensitive
workloads, then that credibility is really going to be able to spill over into a lot of enterprise
deals specifically, right? For users, I don't think anyone is saying, oh my gosh, the government
uses OpenAI, so that should be my default model. But for enterprise, I think that's 100% the signal,
right? If the government, the most classified, quote-unquote, organization in the world is using
this, it's probably something that's great for enterprise. So OpenAI really is expanding its reach
into federal agencies right now, and it's also strengthening its position with enterprise customers
who care about security, compliance, and long-term stability. Let's be honest: I think a lot of
people saw the Anthropic deal falling through and felt like the company was perhaps a little bit
less stable.
Now, is that true or not? I'm not speculating on that; I just think that's what I heard a lot of
enterprises saying. At the same time, OpenAI is keeping control: they decide which models get
deployed, they coordinate directly with customers, and they can enforce additional safeguards. So
this isn't them handing over the keys to AWS and saying, you're the distribution layer, let people
do whatever they want with the product. I think right now the winners are not just the companies
with the best models; it's the companies that control distribution, infrastructure, and trust at
scale. And so if OpenAI is making this deal with AWS, I think that's
kind of the first step forward here. Guys, thank you so much for tuning into the podcast today. If
you enjoyed the episode, make sure to leave a rating and review wherever you get your podcasts.
And as always, make sure to go check out aibox.ai. We've just launched eight new video models on
the platform, so if you want to try those all out for only $8.99 a month, you can check out all of
the latest from OpenAI's Sora, Google's Veo, Seedance, and a ton of other incredible models, all in
one platform. There's a link in the description. I'll catch you guys all in the next episode.
