
Get the top 80+ AI Models for $8.99 at AI Box: https://aibox.ai
How I Grow and Scale My Business with AI: https://www.skool.com/aihustle
Welcome to the podcast, I'm your host Jaden Schaefer.
Today on the show we are talking about Project Glasswing
from Anthropic.
They just tweeted this out like an hour ago.
They said introducing Project Glasswing
an urgent initiative to help secure
the world's most critical software.
It's powered by our newest frontier model,
Claude Mythos preview, which can find software vulnerabilities
better than all but the most skilled humans.
Okay, there is this crazy project.
It's not released to the public.
They're sending this out to security researchers
and they've pledged $100 million essentially
to big companies like Microsoft to go and test
all of the open source software,
all of the software in the world
to find the vulnerabilities and fix it
before they release this to the public
because they said basically,
this is going to be an existential crisis for code
because everything can be hacked
and there's vulnerabilities everywhere that can be found.
So they're trying to give it to the security researchers
to fix everything before they release it.
And it's not just for software.
This is just a general insanely good model
but that's just something that they're concerned about.
So we're going to get into all of that on the podcast
without too much doomerism.
I think there's a lot of optimism
but this is definitely an absolutely massive model drop.
And speaking of AI models,
if you want to test all of the top AI models,
everything from Anthropic to OpenAI to Grok to Gemini,
to ElevenLabs for audio, tons of cool image models,
go check out my startup AIbox.ai for $8.99 a month.
You get access to over 80 of the top audio,
image, text, and video models,
including OpenAI's Sora, which is going to get discontinued
because it costs them $130 to generate a video,
but for you, it's very cheap.
So if you want to check it out,
go to AIbox.ai.
Hope that saves you a ton of money
and you get access to everything in one spot.
All right, let's talk about what's going on with Anthropic.
So they just released this,
what they're calling, of course,
their quote, most powerful model yet.
Now it's interesting, as usually everyone's like,
this is our most capable model,
this is our most powerful model.
This one sounds a little bit more ominous.
And it's not just what they told the world.
There was a leaked memo where they were actually
calling it that internally too.
And basically, right now access is limited.
It's just kind of a debut for a bunch of the top organizations
as part of a new security initiative
in which there's 40 partner organizations,
and they're all deploying the model across
a bunch of different, quote unquote, defensive security work areas.
They're basically trying to secure critical software
before this goes out to the general public.
I think they didn't specify exactly what this was trained on.
So they're not saying, like,
hey, we specifically trained this on cybersecurity work
or on source code.
But right now the preview that they're sending out to everyone
is being used to scan both first party
and open source software systems.
They're looking for code vulnerabilities
and they're just giving this out
to a lot of the big organizations.
What they're saying right now is that over the last few weeks
they were using it internally
and they were able to identify, quote, thousands
of zero-day vulnerabilities,
many of them critical.
So they're saying a lot of the vulnerabilities
are one to two decades old.
So they have this new model.
They ran it on code bases
and they're finding vulnerabilities
that have been around for 10 to 20 years
in literally everything.
And they're just, they're concerned.
They really can't release this to the public
because they're like, as soon as we release it to the public,
attacks on basically all software are going to explode.
So now they're trying to get this out to people
that can fix it before they release it.
It's like, the model's so powerful
they can't release it until they fix all the software in the world.
And so they're like, okay, everyone,
we really want to release this new model
because we're probably gonna make a lot of money
and beat OpenAI,
but we're not held back by anything other than the fact
that we're gonna destroy the entire internet
and all software combined.
So honestly, that's a pretty wild point.
And I mean, this is just crazy.
Apparently, this isn't just like a software model.
It's a general purpose model,
and it's a new tier.
So they have Opus and Sonnet,
they have these other tiers,
and this is gonna be Mythos,
which is kind of the next highest tier.
I guess they're not continuing with the Opus
or, you know, Sonnet naming for this.
Opus has been the best,
but they're actually creating a new tier above it
because it is such a big step up,
which is really interesting.
It has really strong,
agentic coding and reasoning skills.
So everyone using Claude Cowork,
which I've been shouting about from the rooftops,
and Claude Code recently are going to love it.
And it's basically the most sophisticated
and high performance model.
It can do complex tasks
and it can do a lot of agent building and coding.
So who is Anthropic giving this to in order to go
and, you know, test all the code bases in the world
and fix all these vulnerabilities?
They're giving it to Amazon, Apple, Broadcom, Cisco,
CrowdStrike, the Linux Foundation, Microsoft
and Palo Alto Networks.
All of those people are going to share what they've learned
from using the model so that the rest of the tech world
can benefit from it.
It's not gonna be made publicly available yet,
and we don't know exactly when they're gonna actually
launch it. It feels kind of like a wait and see.
They're like, look, we're giving this
to all of the biggest tech companies.
We're gonna see what they can do with it,
what they can fix with it, what they can teach us about it.
And then we'll basically decide on how and when we get
this out.
Anthropic says that right now they have engaged in,
quote, ongoing discussions with a bunch of federal
officials about the use of Mythos, although one would
imagine that a lot of those discussions are
pretty complicated by the fact that Anthropic and the
current administration are having a whole bunch of
legal battles. The Pentagon labeled the AI lab a supply chain
risk because Anthropic didn't let them use their AI model
for autonomous targeting or surveillance.
And basically, there were a bunch of different rules
and Anthropic didn't want to follow them.
Or maybe they just didn't want the precedent of having
rules. I think that's probably a fair characterization.
But in any case, news of this is originally
something that got leaked a little while back,
and we kind of reported on it.
There was a data security incident that got reported
by Fortune, and there was an unpublished blog draft
somewhere that someone found that alluded to this.
So we kind of knew that this was coming.
We just didn't realize how wild this was.
Basically, Anthropic attributed that leak in particular
to, quote unquote, human error.
So they're like, look, it wasn't an AI model leaking
this. The AI model didn't do it.
But what they did say is that Capybara is the name
of a new frontier model.
It's larger and more intelligent than Opus.
So it's actually going to be called Capybara.
That's, I guess, their latest.
I don't know where they get the names for these.
It's almost as bad, in my opinion, but whatever.
So Capybara is going to be better than Opus,
but Mythos is kind of the umbrella of models, right?
They kind of do these pushes where they'll make an umbrella
of models that go from best, to medium
if you want to save power, to smaller
if you want to run it locally or on an edge device
or something like that.
So Capybara is the new one.
It's replacing Opus.
And according to all these leaked documents,
it is, quote, by far the most powerful AI model
we've ever developed. In this leak, Anthropic claimed
that the new model was going to far exceed current performance
in areas like software coding, academic reasoning,
and cybersecurity.
And evidently, cybersecurity was one of the big areas
they were concerned about, because now they're
making this big push.
If you kind of look
at some of the current public models, like sure,
you could use something like ChatGPT or Gemini
or anything for some sort of cybersecurity issue.
But it feels like this one is so advanced,
they're concerned about the threat
of it being weaponized by bad actors.
It's going to find bugs and exploit them.
And because they alone
found so many zero-day exploits
in so much, you know, infrastructure and software,
they're, you know, even with bad relationships
with the government, giving this to the government
and to every major organization
and telling them, look, use this
and try to fix things before something like this gets out.
And the other thing that I think is important
is everyone's like, you know, well, why don't they just
not release it if it's so dangerous?
Why don't they keep it forever?
And the reality is these models are all getting better
and better and someone in China is going to make
an open source version of this and release it either way.
So I think it's in everyone's best interest to take this,
use it to fix the software as fast as possible
because if Anthropic was able to create it,
other people inevitably are going to be able
to create it eventually as well.
Last month, Anthropic accidentally exposed
about 2,000 source code files and more than half a million
lines of code, which was linked to a mistake
in the launch of version 2.188 of Claude Code
and their software package.
The company accidentally caused about 1,000 code repositories
on GitHub to be taken down
because they were trying to clean up the mess,
and they were launching cease and desists.
In addition, I think what not a lot of people know
is that Anthropic right now is absolutely exploding
in revenue.
If you take a look at the numbers,
just how fast Anthropic is growing right now,
they put out a tweet a couple days ago
where they said, our run rate revenue has surpassed
$30 billion up from $9 billion at the end of 2025
as demand for Claude continues to accelerate.
This partnership gives us the compute to keep pace.
So the revenue is exploding
and they're making a lot of these partnerships.
I mean, at the end of 2025,
they were at $9 billion in run rate revenue,
and now they're past $30 billion, triple that,
since the end of last year. I mean, we're three months in.
So this is absolutely exploding. OpenAI is concerned,
everyone's concerned.
Obviously, the software industry is concerned
with what is coming down the pipe here.
Something that's interesting is,
as far as that whole $9 billion at the end of last year goes,
they said when we announced our series G fundraise in February,
we shared that over 500 business customers
were each spending over a million dollars
on an annualized basis.
Today, that number exceeds 1,000,
doubling in less than two months.
That is crazy.
Their growth is absolutely astronomical
and I think here's where Anthropic really crushed it:
OpenAI is kind of targeting the everyday user
who maybe will spend $20 a month
and many will just do it for free.
And Anthropic is targeting business users.
I personally am spending hundreds and hundreds of dollars
a month on it, loving every second of it
because I'm getting so much done.
But I think they know their customer,
they're finding the power users
that are really pushing AI to its limits.
And I mean, even in the case of giving it
to all these cybersecurity people,
it's like they basically made the problem.
They're like, we made a model so good.
It discovered all of the cybersecurity issues.
And now you need to use our model
to fix the problem that we basically made.
And so they're giving it out,
but, in all fairness,
they are pledging about $100 million
that they're giving to all of these different companies
in credits and tokens, and they're like,
look, we're gonna give you guys $100 million
to run through and fix all of the software in the world
because we know we made this problem
and we wanna get the model out.
So that's pretty fascinating.
They're literally paying Microsoft
to fix the security vulnerabilities of the world
because their model is about to crush it.
Everyone, thank you so much for tuning into the podcast today.
If you enjoyed this episode,
I mean, this was an absolute wild ride.
Make sure to test all of the latest models
from OpenAI, Anthropic, and Gemini,
test them side by side.
I think this is so important
to understand the capabilities of all of these models.
And you can do that at AIbox.ai for $8.99 a month
and you also get audio, image, video, everything in one place.
Hope it's a phenomenal product for you.
Put a lot of blood, sweat, and tears into it.
So let me know what you guys think.
Hope you have a fantastic rest of your day
and I will catch you in the next episode.
