
If you want to get access to this episode,
and my next 30 episodes all ad-free,
so there'll be no ads on them,
go check out my podcast AI chat.
You can go search for that on Spotify or Apple.
It's AI chat.
I'm gonna post all of these news episodes,
and I'm also posting interviews,
like I just interviewed the CEO of Cohere.
They've raised over a billion dollars for their AI model,
talking about what they're gonna be spending the money on
and the direction of the AI industry,
along with all of this new stuff.
So if you want to go check it out with no ads for free,
it is AI chat.
Elon Musk and Sam Altman are both in federal court
in Oakland for day three of the $130 billion trial
that could force OpenAI back into nonprofit status,
and also remove Sam Altman from the board.
Before that,
Stanford's 2026 AI Index says transparency
scores on frontier models just collapsed
from 58 to 40 out of 100.
Runway's CEO is positioning the company behind AI video
toward world models at a $5.3 billion valuation.
Ex-Twitter CEO Parag Agrawal
just tripled his agent infrastructure startup's valuation
to $2 billion in five months,
and the White House is drafting an executive action
to quietly walk back its Anthropic ban.
We're gonna get into all of that on the podcast today.
The first one I wanted to cover was the Stanford story.
So Stanford HAI just dropped its 2026 AI Index,
and we're seeing something very interesting in the report.
One of the things that the Foundation Model
Transparency Index showed
was that the average score dropped
from 58 to 40 out of 100 in the last year
for how transparent these AI model companies are.
The direct quote from the report is quote,
the most capable models are now the least transparent.
So we're talking about Google, Anthropic, OpenAI.
All of them have stopped disclosing their data set sizes,
and also they stopped saying how long
their training duration was on the last models.
So we basically have China right now
narrowing the US capability gap.
They're, you know, like 2.7% behind.
We also have generative AI hitting 53% in US adoption.
That's faster than the PC, you know, or the internet.
And at the same time, we have all of the labs
at the top publishing less than they did a year ago.
Stanford's Russell Wald said on X,
capability is going up and the ability to understand
the systems is going down at the same rate.
So if you're an enterprise buyer,
this is something I think that is fascinating.
When your procurement team asks for, you know,
model card data, the answer is increasingly going to be,
we just don't share that anymore.
And I think this is a very interesting new move.
Next up, we have the Runway CEO,
Cristóbal Valenzuela, who went on a podcast recently
with Rebecca Bellan, I believe it was the TechCrunch
Equity podcast, and basically said what a lot of people
have been saying, in fact, I think I said this
a couple days ago on my show, but basically,
the CEO of Runway, the biggest video AI company,
said that AI video is basically a feature
on a much bigger project or product.
And the bigger product is these world models.
So, in order to make this AI-generated
video, you basically have to have these physics models
that understand everything going on in the world.
Using tools like Blender, et cetera,
is one way
some people have done it in the past.
But this is a much bigger thing than just, you know,
the AI-generated video, it would appear.
You have to actually understand so much
and that data set, and that model's ability
to create these world models,
is, they're saying, incredibly valuable
for robotics.
So Runway right now is sitting at a $5.3 billion
valuation.
They have $860 million raised to date.
And I mean, if you line them up against
Sora and Veo in the video generation tier,
I think it's interesting.
The argument that they're making right now
is that the real product surface isn't just like
the videos that are generated.
It's kind of this non-linear media.
That's what they're calling it right now,
which is basically real time generated content
that responds to a viewer or player
instead of playing back a fixed sequence.
So I think the line that really stood out to me
is what they said. They said, quote,
the real constraint on filmmaking
has never been technology.
And what I think they're really getting at there
is that, you know, better tools alone
aren't going to unlock new creative categories.
If you just give any random person AI video generation tools,
they aren't going to be able to go and make a blockbuster movie,
per se. I mean, maybe there are some diamonds
in the rough hidden out there, right?
But typically, the people that are making incredible films
are great storytellers,
people that have been working in this,
that have been honing their art.
And while some random people might have that hidden skill set
in them, it reminds me of the Olympics,
when you had the random guy from Turkey
who entered the Olympic shooting event
and crushed it, and, you know, he'd just kind of picked it up
on the side. Maybe there are some people like that
with filmmaking out there, right?
But I think this isn't something that's incredibly common.
So there's definitely a huge human element to it.
And I think we talk about that a lot.
But the element we don't talk about a lot
is behind the technology, not just the plain video generation
model, but the actual physics behind all of it.
So this is the bet that Yann LeCun
is making with his new world model labs.
It's the same bet that Fei-Fei Li's World Labs is making.
And the same one that OpenAI just signaled by,
essentially when everyone's like,
oh my gosh, they're shutting down Sora.
The big story they didn't talk about
is that the Sora team's resources
are getting put into long-term world simulation research.
So if you're building anything in games,
robotics, interactive media, this is the thesis
that I think all of the smart companies,
all of the smart money is converging on right now.
Yes, like these AI-generated video models are kind of cool.
But the implications of them are much bigger
than just being able to generate clips for filmmakers.
These world models are going to go into robots
and the robots are an absolutely insane disruption
on every industry in the world.
OK, the next thing we want to talk about
is Parallel Web Systems.
This is the agent infrastructure startup
that was founded by Parag Agrawal.
He's the guy who was Twitter's CEO until Elon Musk fired him
in October 2022.
He just closed a $100 million Series B
at a $2 billion valuation.
The series A was five months ago at a $740 million valuation.
So it's basically tripled in five months,
which I think on its own tells you
what is happening at the agent infrastructure layer right now.
Sequoia led this round, one of the top VC firms,
Kleiner Perkins, Index, Khosla, First Round, Spark,
Terrain Capital, all of them are in this.
The company right now runs web search
and also research APIs purposefully built for AI agents.
So basically the layer between the model and the live internet
and Agrawal says that they have over 100,000 developers
on their platform right now with Clay and Harvey
on the list of customers that they have named.
The reason why this round I think actually matters,
even though basically every infrastructure startup right now
is raising a lot of money is because agent traffic
is fundamentally different than human traffic.
Agents don't just browse, they hammer endpoints,
they batch reads, they need structured outputs.
So most search APIs were never actually designed for this.
And I think they've basically just built theirs
from scratch around this idea.
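Parallel hasn't published its interface in this episode, but as a purely hypothetical sketch of what "batched reads with structured outputs" means in practice (every name, field, and the stub backend below are made up for illustration, not Parallel's actual API):

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    # Structured fields an agent can parse directly,
    # instead of scraping a rendered results page.
    url: str
    snippet: str
    score: float

def batch_search(queries, backend):
    # Agents fire many queries in one call (batched reads)
    # and get back a mapping of query -> structured results.
    return {q: backend(q) for q in queries}

# Stub backend standing in for a real agent-oriented search API.
def stub_backend(query):
    slug = query.replace(" ", "-")
    return [SearchResult(url=f"https://example.com/{slug}",
                         snippet=f"Top hit for {query}",
                         score=0.9)]

results = batch_search(["world models", "agent infrastructure"], stub_backend)
```

The contrast with a human-facing search API is the shape of the response: typed records per query rather than a page of HTML, which is what lets an agent chain thousands of these calls without a parsing layer.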
In Anthropic news, which you know we love to cover,
over at Axios, Marina Pitzowski has an interesting story
that the White House is drafting an executive action
to walk back the supply chain risk designation
on Anthropic and clear federal agency access to Mythos,
which is Anthropic's cyber model.
I think this is a clear, you know, 180.
The defense secretary, Pete Hegseth, labeled Anthropic
a supply chain risk in February,
Trump signed a directive ordering federal agencies
off of Anthropic.
And now, eight weeks later, the chief of staff, Susie Wiles,
and the Treasury Secretary, Scott Bessent,
met with Dario Amodei in a meeting that both of them
basically described as
super productive.
And now it feels like the administration is walking back
a lot of the, you know, the designations that were given,
and some of the beef that happened with Anthropic
looks like it's water under the bridge
because we have bigger problems to solve.
So right now we know that the NSA is already using Mythos
under a separate exemption.
And the reason why they're kind of changing their mind
on this is definitely not like a shocker.
Mythos is the only frontier model
which is purpose built for offensive
and defensive cyber and the intel community
made the case internally that you can't run a cyber stack
without it.
Was this a giant 4D chess play by Dario Amodei
to get Anthropic, you know, used in the government again?
It seems like OpenAI was a little bit jealous,
because they said like, hey, no, we have a cool cyber model
as well that's like too dangerous to release.
I'm not sure where we get with all of that.
But before we get into the next story
which is Elon Musk and Sam Altman's trial,
I would love to tell you about my own startup AI box.
If you're already paying for ChatGPT, Claude, Gemini, Grok,
or ElevenLabs for audio, or any of the image models right now,
I would love for you to check out AI box.
It's what I personally built.
You get access to over 80 different AI models
in one place, basically every frontier model
and a ton of really cool open source models are all on there.
It's $8.99 a month.
So instead of paying $20 for ChatGPT and $20 for Claude
and $20 for everything else, it's $8.99 a month,
and you get access to all of the different models
in one place.
It's what I recommend to my friends. It saves you a ton of time,
and there's a lot of cool features.
There's a workflow builder in there.
I'd love for you to go check it out.
There's a link in the description to aibox.ai.
Okay, the Sam Altman trial day three
just wrapped up today in the US district court
for the Northern District of California in Oakland.
Elon Musk is suing Sam Altman, OpenAI,
and Greg Brockman for $130 billion in damages
and he's asking the court to do two things.
Number one, force OpenAI back into a nonprofit form,
and number two, remove Sam Altman and Greg Brockman
from the board.
According to CNBC's Rohan Goswami,
Elon Musk has been on the stand for two days now,
and they're doing a live blog of this whole thing.
But this is the second straight day
under cross examination by Open AI's lead attorney
which is William Savitt of Wachtell Lipton,
who is kind of famously the guy that you hire
when you absolutely can't lose a corporate trial.
And basically the core fact pattern that we're seeing right now
is that Open AI was founded as a nonprofit in 2015
by Musk, Altman and Brockman, Ilya Sutskever,
and a bunch of other people.
Elon Musk donated $38 million.
He basically left the board in 2018,
and depending on whose story you believe,
he either kind of lost the power struggle inside
to make himself the CEO, or he was kind of refusing
to bless the for-profit conversion.
But in 2019, the for-profit subsidiary was created,
and in 2023, Microsoft put in $10 billion.
Today, OpenAI is reportedly worth somewhere around $500 billion.
Although if you look at the secondaries market,
it's trading like people are buying its shares
or selling stock options at an $850 billion valuation.
So I'm not sure where that $500 billion that CNBC is reporting
comes from, but people are paying close to a trillion dollars.
And by the way,
Anthropic is trading on the secondary market
at a trillion dollars, which as a valuation is crazy.
But either way, Elon Musk claims that the for-profit
conversion was a breach
of kind of the founding charter that they have,
and that his donation got used
for unauthorized commercial purposes.
So I think the big moment from the cross-examiner,
Savitt, is that he basically walked Elon Musk
through all of the internal exhibits
showing that Musk himself proposed a for-profit structure
in 2017 and 2018 with Musk holding a majority control
of the cap table in the board.
NPR's Bobby Allyn flagged that as kind of
the most damaging exchange in this trial.
Elon's defense was that in that deal, you know,
he was eventually going to minimize his control;
according to the cross-examining lawyer,
that wasn't actually on the term sheet.
So that seems like kind of a loss for Elon.
What is interesting, though, is that
Elon brought up the point that Microsoft was a bit
of a tipping point, because the $10 billion
investment was too large to be a traditional donation,
and that Microsoft was clearly looking
for a financial return.
Savitt, over at OpenAI,
countered by pointing out
that Elon Musk founded xAI eight months later
and that his lawsuit landed shortly after.
So OpenAI's narrative is basically what they're trying
to say is like, Elon Musk lost the for-profit fight,
so then he started a competitor
and now he's using the courts to try
and stop his competitor.
Casey Newton over at Platformer wrote this morning
that the case will turn less on the legal merit
and more on whether the jury believes Musk's harm
is real or strategic.
Sam Altman right now is expected to take the stand next week.
Brockman, Sutskever, Mira Murati, and Satya Nadella
are all on the witness list,
which is gonna be just wild.
I think the strongest argument against my opinion,
or the take that I've given here, is basically
the corporate structure case, which is, you know,
pretty important for an AI company.
It has to be able to do something like this.
Anthropic did the public benefit corporation.
OpenAI did the capped-profit subsidiary.
I think the simple reality is that you can't really
raise $50 billion as a nonprofit
and the original 501(c)(3) charter was always going to be
revisited the moment that the compute bill became real.
So if you look at it from that standpoint,
then Elon's lawsuit is kind of like selectively enforcing
something that nobody in the industry,
including Elon Musk and xAI, I might add,
actually believes anymore.
And I think that is a pretty fair point.
But I still think the trial is more important
than a lot of people are saying.
Basically, I think the legal question,
whether donations made under a nonprofit
can be retroactively used for for-profit subsidiaries
without donor consent, has a lot of implications
to go way beyond OpenAI, because if the jury sides
with Elon, then every AI lab that took foundation grants
or research donations under a charitable mission,
kind of has a new kind of legal exposure,
like they have more risk there.
And I think that's why Anthropic's lawyers are kind of
sitting in the gallery.
I think a lot of people are watching this case
because they want to see the precedent that comes out of it.
And I think that's not the way it's getting framed
in a lot of the news, but I think that's what's happening.
All right, that's the show for today.
Thank you so much for tuning in.
If you enjoyed the episode, it would mean the world to me
and would really help the show out a lot
if you could leave a rating and review
wherever you get your podcasts.
And as always, make sure to go check out AIBox.ai.
I'll leave a link in the description.
You can go find it.
I'll catch you in the next episode.
