
Get the top 70+ AI Models for $8.99 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
Welcome to the podcast.
I'm your host, Jaeden Schafer.
Today on this show, we're talking about some big news
in the AI space.
Number one, Mistral is betting on build-your-own AI.
They're taking on OpenAI and Anthropic in the enterprise.
Garry Tan has a Claude Code setup,
which is getting a lot of people triggered,
and a lot of people love it.
The Pentagon is developing an alternative to Anthropic,
some new reports have shown.
And BuzzFeed right now is developing
what's being called a, quote unquote, AI slop app.
They're trying to do this to get new revenue.
Google has a personal intelligence feature
that is expanding to all US users
and OpenAI is expanding their government footprint.
And Seedance, the AI video generator
coming out of ByteDance, is actually
getting some serious heat from Congress, which
is calling on ByteDance to shut it down over basically a lack
of guardrails.
So we're getting into all of this on the podcast today.
And we're going to do a deep dive on the Seedance story
in particular.
Before we get into all of that, I wanted to mention
some huge news for my startup, which is aibox.ai.
We have just launched video on our platform.
So in the past, you know that you got access to over 40
of the top AI models all in one place.
You could kind of chat with them in a playground.
We had image, text, and audio.
And we have officially now added video.
So we have two models from ByteDance, their Seedance models.
We have three different models from Google:
Veo 2, Veo 3 Fast, and Veo 3.
We have two different models from OpenAI, including Sora 2
and Sora 2 Pro.
And we have PixVerse V5 from PixVerse.
So there's a ton of amazing video models
that are now on aibox.ai.
If you don't have a subscription already,
you can get it for $8.99 a month.
Super cheap, way cheaper than any of the other platforms.
You get access to 78 different AI models.
Guys, in the past, you've heard me say 40 models a million times.
I'm actually kind of stuck on that number.
But we keep adding new models.
And I just counted right now.
We're at 78 models on AI Box, everything
from text, image, and audio to video, with more announcements coming.
Tons of new features, subscriptions are going crazy.
And we actually doubled revenue last month, which is amazing.
But if you want to check it out, it's linked in the description,
aibox.ai.
Check out all of the latest new video models.
It's only $8.99 a month.
And we have 20% off if you get an annual plan.
So I'll leave a link.
Let's get into everything happening in the news today.
So the first thing I want to cover
is that Google is expanding their personal intelligence
feature.
They're doing this to all of their US users.
So basically, they're pushing Gemini a lot deeper
into the Google ecosystem.
And I mean, I thought it was pretty embedded in there.
I've actually been impressed because I've
been calling on them to do this for like over a year now.
But basically, they're going to let it personalize responses
using connected data from your Gmail, from your Google
photos.
I think what's interesting here is that it's not just
kind of this premium user experience.
Google is going to widen distribution.
They're going to put this capability inside of AI mode
in search and on the Gemini app and Gemini in Chrome.
So it's off by default, but the product direction
is very clear if you want to get that enabled.
I actually appreciate this is off by default,
because I do think, well, it's great and cool
and personal intelligence is awesome.
I don't think everyone's going to appreciate having
their Gmail and their Google photos automatically
opted into AI personalization, let's just say.
I think the next phase of consumer AI
is not just about better models,
but it's about better context.
We know, right, if you give ChatGPT better context on what
you're asking it to do: like if you're
trying to get it to write an article, copy and paste
an example of that article or a specific type of document
or file. That context
makes the output way better.
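As a rough illustration (this helper and its prompt format are my own sketch, not any particular product's API), pasting an example document into the prompt looks something like this:

```python
from typing import Optional


def build_prompt(task: str, example: Optional[str] = None) -> str:
    """Assemble a prompt, optionally grounding it with an example document.

    Models tend to imitate the style and structure of whatever example
    you paste in, which is the "better context" effect described above.
    """
    if example is None:
        return task
    return (
        f"{task}\n\n"
        "Here is an example of the style and format I want:\n"
        "---\n"
        f"{example}\n"
        "---"
    )


# Without context, the model only sees the bare request:
bare = build_prompt("Write a 500-word article about AI video models.")

# With context, it also sees a sample article to imitate:
grounded = build_prompt(
    "Write a 500-word article about AI video models.",
    example="Last week, three labs shipped new video models...",
)
```

Google's personalization presumably does something far more sophisticated with your Gmail and Photos data, but the underlying idea, grounding the request in concrete context, is the same.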
I think Google understands this.
And the company right now that plugs into your email,
your files, your browsing, your history, your photos,
they are going to have a massive advantage.
This is something ChatGPT
had a big advantage on, because people kind of used it
from the beginning, so it had all of that history
and could personalize answers based
on your past ChatGPT conversations.
Google has way more history and data on all of us
for better or for worse.
And I think this is going to give Google a really big
competitive advantage, and they have, you know,
so much distribution.
Now, basically, they have a huge
AI moat on just the data that they have.
So this is going to be interesting.
The next story I want to cover is that the Pentagon
is reportedly building alternatives to Anthropic.
This should come as no surprise as the big spat
between Anthropic and the Pentagon played out
over the last couple of weeks.
There was an obviously very public breakdown
in their relationship and the Defense Department
is now developing replacements rather than assuming
Anthropic is going to remain part of its stack.
I think that is following the broader clash
over military use, surveillance, and autonomous weapons.
And at the same time, I did see that Congress
is pushing to create red lines on what AI could
and couldn't be used for.
I for one think that if we're going to have this stuff
regulated, if we're going to make rules about what AI should
or shouldn't be used for, Congress is the place
for that to happen.
I don't love the setup where, let's say,
the military is going to go and use a model
from, perhaps, Anthropic, right?
Totally cool American company.
And then they make rules that the military doesn't like
about what it can and can't do
according to the terms of service.
What happens if Anthropic, as a private company,
gets purchased by, let's say,
a Chinese investor or a Russian investor,
and all of a sudden they can kind of manipulate
the terms of service of a company
which is being directly used by the military?
So I mean, I'm sure the government would block
any sort of acquisition like that,
but I just don't like the companies
themselves creating the terms that the government
has to follow.
So I appreciate that Congress is looking into this.
And I hope that Anthropic doesn't, you know,
get wrecked too hard financially
from that decision.
It'll be interesting to see what happens in the future.
I know a lot of consumers are kind of supporting the company
because they agreed with some of the reasons
why Anthropic fell out with the Pentagon.
So we'll see.
I do like Anthropic.
I do like Claude, one of my preferred models
for really high quality outputs,
but I just don't think it's a good precedent
for American tech companies to basically
make the rules that I think Congress should
on, you know, military use or what the government should be doing.
All right, the next thing I want to cover
is that Mistral has just launched
what they're calling Mistral Forge.
This was at NVIDIA GTC,
and it's one of the most important enterprise AI product moves
I think I've seen today. Forge is basically designed
to let enterprises and governments build custom models,
trained on their own data,
not just kind of lightly fine-tuned.
I think with all of this,
Mistral really is betting that companies want a lot more control.
They want a lot more customization.
They want a lot less dependencies on someone else's
kind of black box road map.
And so I think right now, Mistral is not trying
to win the consumer chatbot race head on;
they're going after a part of the market where control, governance,
multilingual performance, and long-term ownership
matter more than raw consumer mindshare.
And I think that's important, because if Mistral really is
on track to surpass a billion dollars
in annual recurring revenue this year,
then it's definitely going to be a serious enterprise challenger
to the kind of OpenAI-Anthropic duopoly narrative
that we see right now.
And of course, I don't think Mistral is going to win
the consumer race, especially not
in the United States as a French company.
So perhaps in France, it's the most popular,
but in the United States,
this is not the most popular chatbot for consumers.
So they really got to focus on going the enterprise route.
We've seen this from other players like Cohere.
All right, the next bit of tech drama I have for you
is a culture battle over Garry Tan's Claude Code setup.
It went viral: it has almost 20,000 GitHub stars
and 2,200 forks after he basically
open-sourced his workflow.
A lot of supporters say,
hey, this is super legit.
Haters were saying it's basically just an overhyped prompt package.
Something else I thought was interesting:
Buzzfeed is launching a wave of AI powered content apps.
They're trying to basically unlock new revenue streams.
So they're doing things like quizzes, content generation,
a lot of personalized media experiences driven by AI.
This is kind of interesting to me,
because media companies are not debating right now
whether to use AI.
They're basically just experimenting really aggressively
with it in order to survive.
So many of these media companies are launching
lawsuits against OpenAI and Anthropic saying,
look, you guys scraped us.
Now people don't need to read our content anymore.
And I mean, there's all sorts of arguments
that they're trying to make.
But at the end of the day, they know that AI and the age of AI
is shifting how people read the news,
how they get information, how they see ads.
I think the problem is that a lot of this content
risks becoming
what the internet is already complaining about, right?
This kind of low-quality AI slop.
So the business model from Buzzfeed right now
in this kind of wave of AI powered content apps
isn't very clear, but they're obviously experimenting.
It'll be interesting to see if they're able to actually
make money off of this.
So the biggest story that I've seen today is in politics,
and it's basically a preview of what I think AI regulation
is about to look like.
There are US senators that are now calling on ByteDance
to, quote unquote, immediately shut down
their AI video app,
Seedance 2.0.
Seedance basically lets users generate AI video.
And you can make these videos of real people,
or you can make videos of something
that is, like, a licensed character.
Not inspired by them, not loosely based on them;
you can directly use their likeness.
So basically we're talking about content
featuring people like Tom Cruise, Brad Pitt.
You could do someone from, like, Stranger Things, right?
And you could basically generate all of that with seed dance.
Now, Seedance is a big app, as it's incorporated
straight into CapCut, which is one of the biggest video
editors in the world.
And by the way, we also have Seedance.
We just launched video on aibox.ai,
so we have it over there if you want to try it out.
But right now they're getting a lot of heat
because two senators, one Republican, one Democrat,
they both sent a letter to ByteDance
saying that this is one of the clearest cases
of copyright infringement they've seen from an AI product.
And then basically they're just saying,
shut it down and put real safeguards in place.
I don't think it's just politicians.
I think Hollywood is probably lobbying this pretty hard.
The Motion Picture Association apparently sent a cease
and desist.
There's lawsuits that I think are probably coming from this.
And ByteDance right now has already
paused the global rollout, while they try to deal
with some of the legal fallout of this.
And I'll be honest, I've actually
tested Seedance 2.0 and I was really impressed with it.
I was a little bit shocked that you could, yeah,
generate a video of Tom Cruise and there were,
I don't know, no guardrails at all.
Personally, as a user, I was like, wow,
this is super cool; this is the first video model
that feels like it will just do anything I tell it.
But yeah, we're about to get that nerfed a little bit.
And maybe it's for good, and I don't know,
for me as a user, it's kind of disappointing
but whatever, maybe that's all good, right?
AI models are obviously trained on huge amounts of data,
and that includes copyrighted materials, right?
So when you're sucking in all the video on the planet,
you're inevitably going to get tons of clips
of Tom Cruise and Brad Pitt and all these other actors.
Up until now, I think most of the debate
has been pretty theoretical, because OpenAI
with something like Sora, or Google with Veo 3,
these are very responsible companies,
or at least they're trying to be, in how they handle that.
With Sora, all of the kind of quote unquote deepfake videos
that you'll see coming out are generally of people
who have given their permission.
So you can kind of clone yourself and allow yourself
to be remixed on Sora 2 if you want to do that.
But it's not something that I think, by default,
you can do to other people super easily.
So there are these kinds of guardrails
that other companies have put into place,
and Seedance evidently did not.
So I think right now, on the one side,
governments do not want to slow down AI innovation,
especially when we're competing
with countries like China.
We are being pretty aggressive about building
with AI in the United States right now,
a lot more than other places
like Europe that have already put a lot
of heavy regulation in place.
But on the other side, you can't just ignore
all of the intellectual property
and personal rights. I know I kind of complain about it
and I'm like, oh man, as a user, it was super fun,
but yes, I get it, right?
We can't have all the intellectual property
completely ripped off.
That being said, with these Chinese models
like Seedance, I'd be curious if we end up putting guardrails
on just the American models, while the Chinese models
just let anyone do anything.
And that's honestly a very realistic possibility
in the world that we live in.
So I think what we're moving towards is not really
sweeping AI regulation all at once;
we really have this kind of targeted enforcement.
If you build a tool and people don't like it,
all of the lobbyists yell at the senators,
all of the senators write letters,
and then you kind of have to shut it down pretty quick.
You have these pressure campaigns.
And anyway, the regulation is really kind of crazy right now,
and they're trying to move at breakneck speed.
I think right now, ByteDance is gonna lose in the short term,
because the product is now under a lot of scrutiny
and they're gonna have to delay it.
I don't think this is the last time
we're gonna be hearing about this.
I think this is kind of a problem
that will be going on for a long time into the future.
And also, let's just be honest here:
whether or not you agree with having these copyrights,
intellectual property, and personal likenesses
inside of the videos,
there are gonna be a lot of these open source models
that will allow you to do this regardless.
And there's basically nothing we can do to stop them.
They'll come out of China, they'll come out of a lot of,
I mean, basically they'll come out of China,
and you'll be able to make clones of people.
And I think there's all sorts of terrible sides of that.
So I'm not really trying to be a doomer,
but I mean, I'm trying to be realistic.
That's what's gonna happen.
So I'm curious to see how it plays out.
We'll obviously regulate a lot of the major hyperscalers,
but beyond that, it's not like the technology
is getting bottled up or the regulation is going to,
I mean, basically do anything.
Even when it comes to like voice cloning,
there's the Qwen3-TTS model,
which came out of Alibaba.
It's an open source model.
You throw it on your computer, there's no verification.
You upload three seconds of anybody talking
and you can clone their voice.
Now, for me personally, I've actually used that model
because I'm like, oh, sweet, I can run a voice clone
on my laptop.
I don't have to pay thousands of dollars to ElevenLabs.
So as a user, it's useful.
Like, at what cost? But I mean, it's out there anyways,
so I'm just gonna use it.
But yeah, I mean, I just think it's an important thing.
I'm not saying whether that's good or bad.
I mean, probably it's bad,
but there's nothing we can do about it.
So be prepared for a world where the intellectual
property rights can get regulated for these big players,
but there's gonna be the open source models out there
for everything and there's a lot of implications for that.
Thanks for tuning into the podcast, guys.
If you wanna try out all of the latest models,
including all of the new video models
that we've added to AIbox.ai, go check it out.
There's a link in the description.
You can get started for only $8.99 a month, guys.
It's cheaper than basically any single AI platform out there.
And you get access to 78 different AI models now.
We're gonna keep adding more.
We're super excited; the platform's growing like crazy.
Let us know if you have any feature requests.
We're trying to add them like madmen.
But yeah, we just added music last week.
We added video this week
and we have more exciting stuff coming next week.
So stay tuned.
Thanks so much for tuning in.
I'll catch you in the next episode.
