
ChatGPT.
You either love it or you hate it, am I right?
You love it because it tells you why your back keeps doing that.
You hate it because it uses a boatload of fresh water to do so.
Or maybe you hate it because after OpenAI trained ChatGPT
on centuries of humanity's creative labor,
its leader, Sam Altman, said he wants to sell it right back to us.
We see a future where intelligence is a utility like electricity
or water and people buy it from us on a meter
and use it for whatever they want to use it for.
Cool, but wait: ChatGPT's parent company OpenAI
has the potential to do tons of good too.
Turns out they've got $180 billion of charitable
monies to give away to humanity, to help our cause.
That's more than double what the Gates Foundation has to play with.
OpenAI owes us $180 billion, but are we going to get it,
on Today, Explained from Vox?
Here we go.
Imagine what your dreams can become when you put imagination to work at Canva.com.
Learn more at Adobe.com slash do that with Acrobat.
Hey Chat, introduce Today, Explained, the podcast.
Of course. Today, Explained is a daily news podcast from Vox.
Each episode takes a single...
No, just introduce it like you're introducing the show.
Like, "This is Today, Explained."
Ah, got it.
This is Today, Explained.
Show me the money, Chat.
Sarah Herschander from Vox is here to tell us where to start.
I think we would have to start back in 2015.
So that's when OpenAI started.
OpenAI began as a nonprofit.
It began as this nonprofit AI lab founded by a few donors,
including some extremely familiar names like Elon Musk and Sam Altman.
It's very important that we have the advent of AI in a good way.
And they founded it as a nonprofit to develop AI in a way that is safe
and that will benefit humanity.
And they created it as a nonprofit lab instead of as a corporation
or as like a for profit startup,
which is normally what we would see for this kind of thing
because they figured that this technology was going to be so transformative
that we need to make sure there's no profit motive involved
and nobody is going to make money off of what we're making.
The reason for our structure and the reason it's so weird
is we think this technology, the benefits, the access to it,
the governance of it, belongs to humanity as a whole.
You should like not, if this really works,
it's like quite a powerful technology
and you should not trust one company
and certainly not one person with it.
That was 2015.
Fast forward a few years.
And AI starts getting a lot of buzz
because of a new product called ChatGPT
that OpenAI, the nonprofit lab, developed.
A new artificial intelligence tool is going viral
for cranking out entire essays in a matter of seconds.
We have about 100 million weekly active users now on ChatGPT.
OpenAI is the most advanced and the most widely used AI platform
in the world now.
So over time, OpenAI was saying, you know,
we need a lot more money to be able to do this properly,
cost a lot of money to develop AI.
It costs money to like hire people, the computing power,
all of it costs a lot of money.
So like we need investors.
We can't just rely on donations
and sort of the tax breaks that we get as a nonprofit
to develop this stuff.
And so they created this like what's called
like a capped profit subsidiary,
which was like a little for-profit arm
that they could use to raise that money.
It would still be under the control of the nonprofit
as kind of like the umbrella parent organization.
But they were able to raise some money to some extent.
A lot more money started pouring in.
A lot more interest from investors started pouring in.
And OpenAI was kind of struggling to reconcile
the nonprofit part of their mission
and the fact that they like were becoming this enormous,
one of the most well-known tech companies in the country.
So in 2024, OpenAI decided that they wanted to completely
disentangle themselves from these nonprofit roots.
So they no longer wanted this capped-profit model
where investors could only get a certain amount
of, you know, their investment back.
They wanted to be able to raise as much money as they wanted
and they wanted to be able to kind of behave
like any other sort of for-profit AI company would.
Basically, what it wants to do is it wants to become
a Delaware Public Benefit Corporation.
And what that is, it's really just like a traditional
corporation, but with some permission to do some
public good, to spend on public benefit.
The whole raison d'être of OpenAI was to build
artificial general intelligence,
but for the good of humanity.
That's why initially it was a not-for-profit.
But then suddenly they realized they needed a ton of money
to be able to access the compute to build AGI
and therefore the awkwardness began.
They were eventually able to come up with sort of a deal
with the Attorney General of California,
which is where the company was based.
That split OpenAI formally into like two arms.
One is like the corporation,
an OpenAI corporation that may eventually go public
and the other is this new philanthropy,
which is basically the original nonprofit
that is now like still the parent umbrella organization
of OpenAI, the company,
but it also has these new responsibilities.
Basically the philanthropy has two jobs.
One is to do grant-making,
so like giving money to other charities.
The other one is to do oversight
over OpenAI, the company.
And then on the side of sort of oversight,
now that the OpenAI Foundation has this sort of formalized role
via this deal with the Attorney General,
we haven't seen them, at least publicly, really step up
in a different way, at least not yet.
Can you tell me how OpenAI has sort of made it clear
to the public that this is no longer like a touchy-feely,
for-the-good-of-humanity operation?
It feels like they've entered into controversy
several times in the past few years.
I mean, I don't want to speak for them.
I don't think they would identify as not
being a touchy-feely, for-the-good-of-humanity operation.
I think they're actually trying very hard
to still appear that way.
And I don't want to be too cynical here.
This whole deal is like super, super new.
So it is possible that we'll be seeing
a lot of changes coming in the next year or two.
But I think at least like so far this year,
OpenAI has made a lot of headlines
because of its deal with the Pentagon
and the way that it's behaved in these negotiations
versus its competitor Anthropic,
which was actually founded by former OpenAI employees
who were disgruntled about some of OpenAI's decisions
about converting from the nonprofit.
OpenAI has come across as the company
that was more willing to negotiate with the Pentagon
in a different way than Anthropic was.
Anthropic said it had two red lines
that it would not cross.
The Pentagon said that it was going to move
to declare the company a supply chain risk.
And so OpenAI stepped into it.
They're going to take this contract
but they want to have some safeguards.
Anthropic came across in that whole negotiation
as a company that was willing to stand up against the Pentagon
to put down some red lines on where it did
and did not want its technology to be used.
Whereas OpenAI simply did not come across that way.
It's unclear exactly what those negotiations looked like,
but that is at least, I think,
what the public has taken from those interactions.
And then we've also seen OpenAI get into a little bit
of trouble because of some of its lobbying
around AI safety.
It's been opposed to different state-wide AI safety measures
and they say that they do that
because they want a federal safety measure,
which they're kind of collaborating
with the Trump administration on.
But at the same time, I think a lot of critics
have raised alarms about the fact
that they've been opposed to those kinds of safety measures
which Anthropic, again, this competitor to OpenAI,
has embraced.
So I think at least from the public's perception,
I'm not saying that this is everything
that's going on with OpenAI,
but the perception is certainly not that OpenAI
is stepping forward in a real leadership way around
what it means to be an ethical AI company
specifically given its nonprofit roots.
Okay, so that's what's been going on
on the for-profit side.
What about the not-for-profit side?
Is there anything happening there
with $180 billion of shares, I guess, in OpenAI?
So I spoke to a spokesperson at OpenAI
who says that there is a lot going on behind the scenes,
but not a lot that we've been seeing so far.
Like I said, we have seen that $40.5 million
going to different community nonprofits
which is great.
I talked to some of the nonprofits.
They're wonderful.
But I think $40.5 million is,
I did the math here on the back of a napkin,
but it's about 0.02% of $180 billion.
And while OpenAI has said that it will be giving,
as an initial promise, $25 billion to charity,
falling into two buckets,
one focused on scientific research and health
and one focused on what they're calling AI resilience,
we have no idea what that's actually going to look like.
And again, I'm giving OpenAI the benefit of the doubt.
This deal was made in October, $180 billion is a lot of money.
You almost don't want them to start giving away
that much that quickly.
Like, you want to see them slowly building up their team.
And a really important thing to note
is that the Board of Directors
of the OpenAI Foundation is almost identical
to the Board of Directors of OpenAI, the corporation.
There is one member of the Foundation Board
that is different.
Again, that might change over the course of the year.
But the fact that like the OpenAI Foundation
doesn't have that sort of independent structure just yet
has raised a lot of alarms.
You're saying the people who are influencing
decisions on the for-profit side of OpenAI
are the same people influencing decisions
or a lack thereof on the not-for-profit side.
With the exception of one member, yes.
And when I asked OpenAI about this
and sort of raised the alarm bells
that a lot of people had about the idea
that these board members could kind of put on a different hat
when they're meeting about the foundation
and when they're meeting about the corporation.
You know, the answer was basically
we have conflict of interest policies
and they know how to do that.
Which trust us, we're professionals.
Yeah, basically trust us.
Which I think raised a lot of doubts
for a lot of the critics who've been skeptical
about the restructuring.
That was Sarah Herschander.
She's a fellow at Future Perfect here at Vox.
It's a section of the Vox website
that focuses on making the world a better place.
Imagine that.
In a minute on Today, Explained, also from Vox,
We're going to hear from one of OpenAI's most prominent critics.
She's not just skeptical about this restructuring.
She thinks it's illegal.
Support for the show comes from public.
The investing platform for those who take it seriously.
On public, you can build a multi-asset portfolio
of stocks, bonds, and options,
and now Generated Assets, which allow you to turn any idea
into an investable index with AI.
Go to public.com slash podcast
and earn an uncapped 1% bonus when you transfer your portfolio.
That's public.com slash podcast.
Paid for by Public Investing. Brokerage services
by Open to the Public Investing, Inc., member FINRA and SIPC.
Advisory services by Public Advisors LLC, an SEC-registered advisor.
Generated Assets is an interactive analysis tool;
output is for informational purposes only
and is not an investment recommendation or advice.
Complete disclosures available at public.com slash disclosures.
Let's flex those tools.
Draft design to live and make it sing.
AI builds the deck so you can build that thing.
Do that, do that, do that, do that with acrobat.
Learn more at Adobe.com slash do that with acrobat.
Once upon a mundane morning,
Barb's day got busy without warning.
A realtor in need of an open house sign.
No, 50 of them, all designed before nine.
My back hurts.
Any mighty tools to help with this plight?
Ha ha! Barb made her move,
choosing Canva and got in the groove.
With Canva Sheets,
she created 50 signs, fit for suburban streets.
Done in a click, all complete. Sweet.
Now, imagine what your dreams can become.
When you put imagination to work at canva.com.
I can't wait to work with you on crimes.
I'm really excited to dive in and explore all the angles with you.
Catherine Bracy is the head of TechEquity.
It's an advocacy group whose main position is that tech growth
should benefit everyone.
She also knows Sam Altman.
We worked together back in the day.
And then we kind of fell out of touch with each other for a few years.
And then when I was writing a book about venture capital,
I was really interested in OpenAI's nonprofit model.
And Sam had been very explicit that the reason they founded OpenAI
as a nonprofit was to put the technology at arm's length from investors
because they knew investors would exploit it in a way
that would make this technology,
which they thought was very dangerous,
actually live up to that potential danger.
And so I wanted to talk to him about the decision-making process
behind that.
And he was very forthcoming about that being, yes,
the explicit reason why OpenAI was founded as a nonprofit.
And they put a lot of thought and capacity and energy
into creating this governance structure
that would protect the technology from the whims
of investors, the incentives of investors,
the imperatives that investors put on technology companies.
And you know, a few months later I saw that all come crashing down
and that has really stuck with me and informs a lot of the work
that we're doing today to ensure that the nonprofit maintains
the mission that it started out with.
We asked Catherine how she felt when she found out that OpenAI
was going to try and have it both ways,
mission-driven nonprofit, but also money-driven for profit.
Disappointment, I would say, was my initial reaction
and then the secondary response was, well, what can we do about this?
And many of us kind of came together into this coalition
that really started asking questions about the responsibility
of the nonprofit and the responsibility of the attorney general
of California to enforce nonprofit law.
And you know, things kind of went from there.
Tell me more about that. What's nonprofit law look like
as it pertains to say OpenAI?
Essentially, you know, I run a nonprofit. In the tax code,
that means that, you know, my organization does not need to pay taxes,
but in return for that tax exemption,
we are required to operate in service of a public service mission.
Our mission is to ensure that the tech industry
is creating an opportunity for everybody.
OpenAI's nonprofit mission is to ensure that AI develops
for the benefit of all of humanity.
And legally, Sam Altman is required to prioritize OpenAI's mission
above all else and that means that anything that is created
under that sort of tax-exempt banner is owned by the charitable sector
and can never be divested from the charitable sector.
So when they decided they were going to split the nonprofit
from the for-profit, they found that actually, legally,
they could not do that without divesting both the intellectual property
that the nonprofit owned, including all of the intellectual property
that was created, you know, that underlies the
ChatGPT model, and the equity stake that the nonprofit owned
in the for-profit company.
And so I think they looked at that price tag and they said,
that's not a price we're willing to pay.
And so instead of sort of splitting the nonprofit
from the for-profit, they decided to sort of continue
down this path of nonprofit ownership,
which in my mind is completely untenable, unsustainable,
and irreconcilable.
Basically, every day that OpenAI exists,
they are violating the law.
And actually what they're doing is just daring the attorney general
to hold them accountable for it.
I think they think they're too big to be held accountable
and they need the AG to assume that he will not win a case
and that's kind of what they've done.
They've loaded up on lawyers and they are making a bet
that the AG will not sort of pursue this in any way
that's actually meaningful.
Okay, so if I'm following you,
despite the fact that OpenAI has split itself into a for-profit arm
and a nonprofit arm, their not-for-profit mission
still overrides everything they do.
And because of that, they are violating California law
because there's no way that the nonprofit interests
are ever going to be primary in their business.
Right, I mean, I think as the kids would say,
they're playing in our faces.
I mean, they expect us to take their word that,
as they operate, as they make deals with the defense department
to develop autonomous weapons and surveillance systems
on American citizens, as they battle parents in court
whose children have committed suicide due to conversations
that these kids were having with their chatbots
and as they subpoena these parents for the list of people
who attended their children's memorial service
as part of those lawsuits,
they expect us to believe that the nonprofit mission
is being prioritized over the profit motivation of the company.
We all know that OpenAI's overriding priority
is to quote-unquote win the AI race.
It's to beat out the competition in the marketplace
and it's to establish the biggest AI company they can create.
And to the extent that the nonprofit mission
ever comes into tension with that,
the company will always prioritize profits over the mission.
But a law is only as good as its enforcement
and I think if there's one sort of rule of Silicon Valley,
it is to ask forgiveness and not permission
and breaking the law and skirting regulations
is part of the venture capital playbook.
I think they said, you know, this is worth it.
There's enough money on the line for us to just break the law
and do the PR work and the lobbying work
and the other work that we need to do
to ensure that these laws will never be enforced against us.
And when you talk about PR work, lobbying work,
are you talking about like saying
we're going to give away this $180 billion eventually?
Well, here's a thing.
They announced this week a list of priorities
that the foundation would be investing in.
They listed in one of their priorities, Alzheimer's research.
My mother is currently dying of Alzheimer's.
I have one copy of the gene that
puts me at extreme risk of developing Alzheimer's when I'm older.
So I pray every day that AI helps us find a solution
to Alzheimer's fast enough that I can benefit from it,
that my family can benefit from it.
And so I'm thrilled to see them make a commitment
to deploying AI to find cures to Alzheimer's
and other diseases.
But let me ask you a question.
What happens do you think if the research that's funded
by OpenAI's foundation finds
that actually Anthropic's models are better at drug discovery
or scientific breakthroughs than ChatGPT
or any of OpenAI's other models?
What do you think happens then?
And what does it mean for the independence
of scientific research?
If all of this research is funded by an entity
that has an irreconcilable conflict of interest,
we would not accept the science around nicotine
that the tobacco companies were funding.
We do not accept the science around alcohol addiction
that the alcohol companies fund.
We do not accept the science around sugared beverages
from the soda industry.
And we should not accept that this scientific research
is funded by an entity that has a vested financial interest
in the outcome.
And that is why it is so critically important
that the OpenAI Foundation actually be independent,
that it have an independent board,
that it can deploy its resources independently,
that the research that it is funding is independent.
And if you wonder whether this is actually true,
you should ask any of the researchers
who were given access to Facebook's data
and ask what happened to them.
And they will tell you that it does not work to do research,
independent research that is funded by the tech industry
on the impact of the tech industry's own platforms.
Do you still think that we're maybe better off
that OpenAI says that they want to give billions away
to better society than, say, Anthropic or, you know, Google,
which maybe have some pledges to give money away,
but not nearly as much?
Is it still better that they want to give money away at all?
Well, Google has a corporate foundation.
It's called Google.org.
And I expect in this structure,
with the tension and the conflict of interest,
that the OpenAI Foundation has,
that it will operate much more like Google.org,
which is essentially an arm of the marketing department,
a corporate social responsibility program,
that sort of gives money to innocuous groups,
but will never do anything that undercuts Google's priorities.
And I think if you read between the lines of OpenAI's press release,
the work they say they want to continue doing
with community funding is all about convincing people
about the importance and value and benefit in using AI.
I mean, that's a market building opportunity for them.
That's not actually anything that's going to ensure that
AI is developed for the benefit of humanity.
And so, no, I don't think that they're going to operate
any differently than any of the other companies,
you know, corporate social responsibility arms.
That's essentially what they had built here.
This is the fight of our time.
AI is not inevitable.
The way it develops is not inevitable.
And we do not have to take these companies at their word
that they know best how to govern this technology.
We should have bigger imaginations about what's possible.
And if anything, this should give us more energy and motivation
to fix what's broken about our democracy,
than just sit back and let billionaires control our future.
Do you ever talk to Sam Altman anymore?
He doesn't return my calls.
Well, thanks for talking to us.
I'm happy to, anytime.
Catherine Bracy, she loves tech,
but she also wants it to work better for the people.
She wrote a book all about her position.
It's called World Eaters:
How Venture Capital Is Cannibalizing the Economy.
We reached out to OpenAI to ask what they thought about Catherine's argument
that they're openly breaking California nonprofit law,
but we didn't hear back yet.
Should we ask chat?
Let me ask chat.
That's an idea.
Okay, here we go.
Is OpenAI violating California nonprofit law?
It's not settled.
There are active allegations and legal challenges,
but no court has definitively ruled
that OpenAI is violating California's nonprofit law.
Huh.
Danielle Huett produced today's show,
Jolie Myers edited,
Patrick Boyd and David Tatashore mixed,
and Andrea Lopez-Crusaba was on the fact-check.
I'm Sean Rameswaram,
and this is Today, Explained.